-----------------------------------------------
Intel(R) MPI Library 3.2 Update 2 for Linux* OS
Release Notes
-----------------------------------------------

--------
Contents
--------

- Overview
- What's New
- Key Features
- System Requirements
- Installation Notes
- Documentation
- Special Features and Known Issues
- Technical Support
- Disclaimer and Legal Information

--------
Overview
--------

The Intel(R) MPI Library for Linux* OS is a multi-fabric message passing
library based on ANL* MPICH2* and OSU* MVAPICH2*. The Intel(R) MPI Library
for Linux* OS implements the Message Passing Interface, version 2 (MPI-2)
specification.

To receive technical support and updates, you need to register your
Intel(R) Software Product. See the Technical Support section.

Product Contents
----------------

The Intel(R) MPI Library Runtime Environment (RTO) contains the tools you
need to run programs, including MPD daemons and supporting utilities,
shared (.so) libraries, and documentation.

The Intel(R) MPI Library Development Kit (SDK) includes all of the Runtime
Environment components plus compilation tools, including compiler commands
(mpicc, mpiicc, etc.), include files and modules, static (.a) libraries,
debug libraries, trace libraries, and test codes.

Related Products and Services
-----------------------------

Information on Intel(R) software development products is available at
http://www.intel.com/software/products.

Some of the related products include:

- The Intel(R) Software College provides training for developers on
  leading-edge software development technologies. Training consists of
  online and instructor-led courses covering all Intel(R) architectures,
  platforms, tools, and technologies.
----------
What's New
----------

The Intel(R) MPI Library 3.2 Update 2 for Linux* OS is an update release
of the Intel(R) MPI Library for Linux* OS.

This release includes the following updates compared to the Intel(R) MPI
Library 3.2 Update 1 (see the product documentation for more details):

- Performance enhancements
  o Up to 1.5x mpirun and mpdboot out-of-the-box performance improvement
  o Up to 3x faster startup through the mpdboot --parallel-startup option
  o Up to 10x improved file I/O performance for:
    - Panasas* ActiveScale* File System (PanFS)
    - Parallel Virtual File System, Version 2 (PVFS2)
    through the I_MPI_EXTRA_FILESYSTEM and I_MPI_EXTRA_FILESYSTEM_LIST
    environment variables (see the example below)
- Usability improvements
  o Up to 16384 simultaneously created communicators (previously 1024)
- Extended interoperability
  o Intel(R) Compiler 11.1 Update 3 support
  o SuSE* Linux Enterprise Server 11 support
- Deprecated features
  o The DAPL* 1.1 standard-compliant provider library/driver
  o The following compiler driver options:
    -compile-info
    -compile_info
    -link-info
    -link_info
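For example, the following commands enable the optimized file I/O path for
a program whose MPI-IO files reside on PVFS2 (a minimal sketch: ./myprog
is a hypothetical executable, and the exact accepted variable values are
described in the Intel(R) MPI Library for Linux* OS Reference Manual):

  # export I_MPI_EXTRA_FILESYSTEM=on
  # export I_MPI_EXTRA_FILESYSTEM_LIST=pvfs2
  # mpiexec -n 4 ./myprog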
The Intel(R) MPI Library 3.2 Update 1 release includes the following
updates compared to the Intel(R) MPI Library 3.2 (see the product
documentation for more details):

- Performance enhancements
  o Collective optimization
  o Scalable mpdboot startup through the -b and --parallel-startup options
- Usability improvements
  o Linux* Standard Base (LSB) compliant RPMs
  o ILP64 support through the -ilp64 option
  o Process pinning improvement
- Extended interoperability
  o Intel(R) Compiler 11.0 and higher support

The Intel(R) MPI Library 3.2 for Linux* OS includes the following new
features compared to the Intel(R) MPI Library 3.1 (see the product
documentation for more details):

- Performance enhancements
  o Automatic application-specific performance tuning through the mpitune
    utility
  o Simplified selection of IPoIB for sock and ssm communication through
    the I_MPI_NETMASK variable
  o Faster process startup thanks to a disabled Python* compatibility
    check
  o Faster RDMA and RDSSM wait mode through the I_MPI_RDMA_WRITE_IMM
    variable
  o Further optimized Alltoall, Alltoallv, Allreduce, Gather, Scatter,
    and Bcast collective operations
  o Greater scalability for the sock and ssm devices
- Usability improvements
  o Advanced shared memory segment size control
  o Flexible OS, Python*, compiler, and DAPL* compatibility check control
  o LD_LIBRARY_PATH prioritization over the built-in -rpath setting in
    the compiler drivers
  o Loadable third-party process manager libraries
- Extended interoperability
  o Intel(R) Compiler 11.0 support
  o DAPL* 2.0 support

------------
Key Features
------------

This release of the Intel(R) MPI Library supports the following major
features:

- MPI-1 and MPI-2 specification conformance with some limitations. See
  the Special Features and Known Issues section.
- Support for any combination of the following interconnection fabrics:
  o Shared memory
  o RDMA-capable network fabrics via DAPL*, such as InfiniBand* and
    Myrinet*
  o Sockets, for example, TCP/IP over Ethernet*, Gigabit Ethernet, and
    other interconnects
- (SDK only) Support for IA-32, Intel(R) 64, and Itanium(R) 2
  architecture clusters using:
  o Intel(R) C++ Compiler for Linux* OS version 9.1 through 11.0 and
    higher
  o Intel(R) Fortran Compiler for Linux* OS version 9.1 through 11.0 and
    higher
  o GNU* C, C++, and Fortran 95 compilers
- (SDK only) C, C++, Fortran 77, and Fortran 90 language bindings
- (SDK only) Dynamic or static linking
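(SDK only) As a quick illustration of the compile-and-run model, the
following builds and launches a simple program (a sketch: hello.c is a
placeholder source file, and the mpdboot step assumes a one-node MPD
ring; see the Getting Started Guide for the full startup procedure):

  # mpiicc hello.c -o hello
  # mpdboot -n 1
  # mpiexec -n 4 ./hello
  # mpdallexit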
-------------------
System Requirements
-------------------

The following sections describe supported hardware and software.

Supported Hardware
------------------

Systems based on the IA-32 architecture:
  Intel(R) Pentium(R) 4 processor or higher
  Intel(R) Core(TM) i7 processor recommended
  1 GB of RAM per core
  2 GB of RAM per core recommended
  1 GB of free hard disk space

Systems based on the Intel(R) 64 architecture:
  Intel(R) Core(TM)2 processor family or higher
  Intel(R) Xeon(R) 5500 processor series recommended
  1 GB of RAM per core
  2 GB of RAM per core recommended
  1 GB of free hard disk space

Itanium(R) 2 based systems:
  Intel(R) Itanium(R) processor 9000 sequence recommended
  1 GB of RAM per core
  2 GB of RAM per core recommended
  1 GB of free hard disk space

Supported Software
------------------

Operating Systems:

  Systems based on the IA-32 architecture:
    Red Hat* Enterprise Linux* 4.0, or
    Red Hat Enterprise Linux 5.0, or
    SuSE* Linux Enterprise Server* 9, or
    SuSE Linux Enterprise Server 10

  Systems based on the Intel(R) 64 architecture:
    Red Hat Enterprise Linux 4.0, or
    Red Hat Enterprise Linux 5.0, or
    Fedora* 7 through 8, or
    CAOS* 2, or
    CentOS* 4.6, or
    CentOS 5.1, or
    SuSE Linux Enterprise Server 9, or
    SuSE Linux Enterprise Server 10, or
    SuSE Linux Enterprise Server 11, or
    openSuSE* Linux* 10.3

  Itanium(R) 2 based systems:
    Red Hat Enterprise Linux 4.0, or
    Red Hat Enterprise Linux 5.0, or
    SuSE Linux Enterprise Server 9, or
    SuSE Linux Enterprise Server 10, or
    SuSE Linux Enterprise Server 11

(SDK only) Compilers:

  GNU*: C, C++, Fortran 77 3.3 or higher, Fortran 95 4.0 or higher
  Intel(R) C++ Compiler for Linux* OS 9.1, 10.0, 10.1, 11.0, 11.1 and
  higher
  Intel(R) Fortran Compiler for Linux* OS 9.1, 10.0, 10.1, 11.0, 11.1 and
  higher

(SDK only) Supported Debuggers:

  Intel(R) Debugger 9.1-23 and higher
  TotalView Technologies* TotalView* 6.8 and higher
  Allinea* DDT* v1.9.2 and higher
  GNU* Debuggers

Batch Systems:

  Platform* LSF* 6.1 and higher
  Altair* PBS Pro* 7.1 and higher
  OpenPBS* Torque* 1.2.0 and higher
  Parallelnavi* NQS* for Linux* OS V2.0L10 and higher
  Parallelnavi for Linux* OS Advanced Edition V1.0L10A and higher
  NetBatch* v6.x and higher
  SLURM* 1.2.21 and higher
  Sun* Grid Engine* 6.1 and higher

Recommended InfiniBand* Software:

- OpenFabrics* Enterprise Distribution (OFED*) 1.3.1 or higher

Additional Software:

- Python* 2.2 or higher, including the python-xml module. Python*
  distributions are available for download from your OS vendor or at
  http://www.python.org (for Python* source distributions).
- An XML parser such as expat* or PyXML*.
- If using InfiniBand*, Myrinet*, or other RDMA-capable network fabrics,
  a DAPL* 1.1 or DAPL* 1.2 standard-compliant provider library/driver is
  required. DAPL* providers are typically provided with your network
  fabric hardware and software.

(SDK only) Supported Languages
------------------------------

For GNU* compilers: C, C++, Fortran 77, Fortran 95
For Intel(R) compilers: C, C++, Fortran 77, Fortran 90, Fortran 95

------------------
Installation Notes
------------------

See the Intel(R) MPI Library for Linux* OS Installation Guide for
details.

-------------
Documentation
-------------

The Intel(R) MPI Library for Linux* OS Getting Started Guide, found in
Getting_Started.pdf, contains information on the following subjects:

- First steps using the Intel(R) MPI Library for Linux* OS
- First-aid troubleshooting actions

The Intel(R) MPI Library for Linux* OS Reference Manual, found in
Reference_Manual.pdf, contains information on the following subjects:

- Command Reference describes commands, options, and environment
  variables
- Tuning Reference describes environment variables that influence library
  behavior and performance

The Intel(R) MPI Library for Linux* OS Installation Guide, found in
INSTALL.html, contains information on the following subjects:

- Obtaining, installing, and uninstalling the Intel(R) MPI Library
- Getting technical support

---------------------------------
Special Features and Known Issues
---------------------------------

- The Intel(R) MPI Library Development Kit package is layered on top of
  the Runtime Environment package. See the Intel(R) MPI Library for
  Linux* OS Installation Guide for more details.

- The default installation path for the Intel(R) MPI Library has changed
  to /opt/intel/impi/3.2.2. If necessary, the installer establishes a
  symbolic link from the expected default RTO location to the actual RTO
  or SDK installation location.

- The Intel(R) MPI Library automatically places consecutive MPI processes
  onto all processor cores. Use the mpiexec -perhost 1 option, or set the
  I_MPI_PERHOST environment variable to 1, to obtain round-robin process
  placement.

- The Intel(R) MPI Library pins processes automatically. Use the
  I_MPI_PIN and related environment variables to control process pinning.
  See the Intel(R) MPI Library for Linux* OS Reference Manual for more
  details.

- The Intel(R) MPI Library provides thread safe libraries up to level
  MPI_THREAD_MULTIPLE. The default level is MPI_THREAD_FUNNELED.
  (SDK only) Follow these rules:
  o Use the Intel(R) MPI compiler driver -mt_mpi option to build a thread
    safe MPI application.
  o Do not load thread safe Intel(R) MPI libraries through dlopen(3).

- To run a mixed Intel(R) MPI/OpenMP* application, do the following:
  o Use the thread safe version of the Intel(R) MPI Library by using the
    -mt_mpi compiler driver option.
  o Set I_MPI_PIN_DOMAIN to select the desired process pinning scheme.
    The recommended setting is I_MPI_PIN_DOMAIN=omp.
  o Note that I_MPI_PIN_DOMAIN has no effect on Itanium(R) 2 based SGI*
    Altix* systems.
  o See the Intel(R) MPI Library for Linux* OS Reference Manual for more
    details.
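  For example, a hybrid run of two MPI processes with four OpenMP*
  threads each might look as follows (a sketch: hybrid.c and the
  process/thread counts are placeholders):

  # mpiicc -mt_mpi -openmp hybrid.c -o hybrid
  # export OMP_NUM_THREADS=4
  # export I_MPI_PIN_DOMAIN=omp
  # mpiexec -n 2 ./hybrid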
- Intel(R) MKL 10.0 may create multiple threads depending on various
  conditions. Follow these rules to use Intel(R) MKL correctly:
  o (SDK only) Use the thread safe version of the Intel(R) MPI Library in
    conjunction with Intel(R) MKL by using the -mt_mpi compiler driver
    option.
  o Set the OMP_NUM_THREADS environment variable to 1 to run an
    application linked with the non-thread safe version of the Intel(R)
    MPI Library.

- The Intel(R) MPI Library uses dynamic connection establishment by
  default for 64 or more processes. To always establish all connections
  upfront, set the I_MPI_DYNAMIC_CONNECTION environment variable to
  "disable".

- The Intel(R) MPI Library compiler drivers embed the actual Development
  Kit library path (default /opt/intel/impi/<version>) and the default
  Runtime Environment library path (/opt/intel/mpi-rt/<version>) into the
  executables using the -rpath linker option.

- Use the LD_PRELOAD environment variable to preload the appropriate
  Intel(R) MPI binding library to start an MPICH2 Fortran application in
  the Intel(R) MPI Library environment.

- The Intel(R) MPI Library enhances message-passing performance on
  DAPL*-based interconnects by maintaining a cache of virtual-to-physical
  address translations in the MPI DAPL* data transfer path. Set the
  environment variable LD_DYNAMIC_WEAK to "1" if your program dynamically
  loads the standard C library before dynamically loading the Intel(R)
  MPI Library. Alternatively, use the environment variable LD_PRELOAD to
  load the Intel(R) MPI Library first. To disable the translation cache
  completely, set the environment variable I_MPI_RDMA_TRANSLATION_CACHE
  to "disable". Note that you do not need to set the aforementioned
  LD_DYNAMIC_WEAK or LD_PRELOAD variables when you disable the
  translation cache.

- (SDK only) Always link the standard libc libraries dynamically if you
  use the RDMA or RDSSM devices, to avoid possible segmentation faults.
  It is safe to link the Intel(R) MPI Library statically in this case.
  Use the -static_mpi option of the compiler drivers to link the libmpi
  library statically. This option does not affect the default linkage
  method for other libraries.
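  For example (a sketch: myprog.c is a placeholder), the following links
  the MPI library statically while libc and all other libraries remain
  dynamically linked:

  # mpiicc -static_mpi myprog.c -o myprog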
- Certain DAPL* providers may not work with the Intel(R) MPI Library for
  Linux* OS, for example:
  o Voltaire* GridStack*. Contact Voltaire*, or download an alternative
    OFED* DAPL* provider at http://www.openfabrics.org.
  o QLogic* QuickSilver Fabric*. Set the I_MPI_DYNAMIC_CONNECTION_MODE
    variable to "disconnect" as a workaround, contact QLogic*, or
    download an alternative OFED* DAPL* provider at
    http://www.openfabrics.org.
  o Myricom* DAPL* provider. Contact Myricom*, or download an alternative
    DAPL* provider at http://sourceforge.net/projects/dapl-myrinet. The
    alternative DAPL* provider for Myrinet* supports both the GM* and MX*
    interfaces.

- The GM* DAPL* provider may not work with the Intel(R) MPI Library for
  Linux* OS with some versions of the GM* drivers. Set
  I_MPI_RDMA_RNDV_WRITE=1 to avoid this issue.

- Certain DAPL* providers may not function properly if your application
  uses the system(3), fork(2), vfork(2), or clone(2) system calls. Do not
  use these system calls or functions based upon them; for example, do
  not use system(3) with:
  o the OFED* DAPL* provider and a Linux* kernel version earlier than
    official version 2.6.16. Set the RDMAV_FORK_SAFE environment variable
    to enable the OFED* workaround with a compatible kernel version.

- The Intel(R) MPI Library does not support heterogeneous clusters of
  mixed architectures and/or operating environments.

- The Intel(R) MPI Library requires Python* 2.2 or higher for process
  management.

- The Intel(R) MPI Library requires the python-xml* package or its
  equivalent on each node in the cluster for process management. For
  example, the following OS does not have this package installed by
  default:
  o SuSE* Linux Enterprise Server 9

- The Intel(R) MPI Library requires the expat* or pyxml* package, or an
  equivalent XML parser, on each node in the cluster for process
  management.

- The following MPI-2 features are not supported by the Intel(R) MPI
  Library:
  o Process spawning and attachment

- If installation of the Intel(R) MPI Library package fails with the
  error message "Intel(R) MPI Library already installed" when the package
  is not actually installed, try the following:
  1. Determine the package name that the system believes is installed by
     typing:
     # rpm -qa | grep intel-mpi
     This command returns an Intel(R) MPI Library <package name>.
  2. Remove the package from the system by typing:
     # rpm -e <package name>
  3. Re-run the Intel(R) MPI Library installer to install the package.

  TIP: To avoid installation errors, always remove the Intel(R) MPI
  Library packages using the uninstall script provided with the package
  before trying to install a new package or reinstall an older one.

- Due to an installer limitation, avoid installing earlier releases of
  the Intel(R) MPI Library packages after having already installed the
  current release. Doing so may corrupt the installation of the current
  release and require that you uninstall/reinstall it.

- Certain operating system versions have a bug in the rpm command that
  prevents installations other than in the default install location. In
  this case, the installer does not offer the option to install in an
  alternate location.

- If the mpdboot command fails to start up the MPD, verify that the
  Intel(R) MPI Library package is installed in the same path/location on
  all the nodes in the cluster. To solve this problem, uninstall and
  re-install the Intel(R) MPI Library package while using the same path
  on all nodes in the cluster.

- If the mpdboot command fails to start up the MPD, verify that all
  cluster nodes have the same Python* version installed. To avoid this
  issue, always install the same Python* version on all cluster nodes.

- The presence of environment variables with non-printable characters in
  user environment settings may cause the process startup to fail. To
  work around this issue, the Intel(R) MPI Library does not propagate
  environment variables with non-printable characters across the MPD
  ring.

- A program cannot be executed when it resides in the current directory
  but "." is not in the PATH. To avoid this error, either add "." to the
  PATH on ALL nodes in the cluster, or use an explicit path to the
  executable (or the ./ prefix) in the mpiexec command line.
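  For example, both of the following avoid the PATH lookup (a sketch:
  myprog and the directory path are placeholders):

  # mpiexec -n 4 ./myprog
  # mpiexec -n 4 /home/user/bin/myprog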
- The Intel(R) MPI Library 2.0 and higher supports PMI wire protocol
  version 1.1. Note that this information is specified as
    pmi_version = 1
    pmi_subversion = 1
  instead of
    pmi_version = 1.1
  as done by the Intel(R) MPI Library 1.0.

- The Intel(R) MPI Library requires the presence of the /dev/shm device
  in the system. To avoid failures related to the inability to create a
  shared memory segment, make sure the /dev/shm device is set up
  correctly.

- (SDK only) Certain operating systems ship GNU* compilers version 4.2 or
  higher that are incompatible with the Intel(R) Compiler 9.1. Use the
  Intel(R) Compiler 10.1 or later on the respective operating systems,
  for example:
  o SuSE* Linux Enterprise Server 11

- (SDK only) Certain GNU* C compilers may generate code that leads to
  inadvertent merging of some output lines at runtime. This happens when
  different processes write simultaneously to the standard output and
  standard error streams. To avoid this, use the -fno-builtin-printf
  option of the respective GNU* compiler while building your application.

- (SDK only) Certain versions of the GNU* libc library define the
  free()/realloc() symbols as non-weak. Use the ld
  --allow-multiple-definition option to link your application.

- (SDK only) A known exception handling incompatibility exists between
  GNU* C++ compilers version 3.x and version 4.x. Use the special
  -gcc-version=<nnn> option of the compiler drivers mpicxx and mpiicpc to
  link an application when running in a particular GNU* C++ environment.
  The valid values are:
  o 320 if the GNU* C++ version is 3.2.x
  o 330 if the GNU* C++ version is 3.3.x
  o 340 if the GNU* C++ version is 3.4.x
  o 400 if the GNU* C++ version is 4.0.x
  o 410 if the GNU* C++ version is 4.1.x or 4.2.x
  A library compatible with the detected version of the GNU* C++ compiler
  is used by default. Do not use this option if the gcc version is older
  than 3.2.
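  For example, to link against the library that matches a GNU* C++ 3.4.x
  environment (a sketch: app.cpp is a placeholder):

  # mpiicpc -gcc-version=340 app.cpp -o app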
- (SDK only) The Fortran 77 and Fortran 90 tests in the
  <installdir>/test directory may produce warnings when compiled with the
  mpif77, etc. compiler commands. You can safely ignore these warnings,
  or add the -w option to the compiler command line to suppress them.

- (SDK only) In order to use the GNU* Fortran compiler version 4.0 and
  higher, use the mpif90 compiler driver.

- (SDK only) A known module file format incompatibility exists between
  the GNU* Fortran 95 compilers. Use the Intel(R) MPI Library mpif90
  compiler driver to automatically select the appropriate MPI module.

- (SDK only) Perform the following steps to generate bindings for a
  compiler that is not directly supported by the Intel(R) MPI Library:
  1. Go to the binding directory:
     # cd <installdir>/binding
  2. Extract the binding kit:
     # tar -zxvf intel-mpi-binding-kit.tar.gz
  3. Follow the instructions in README-intel-mpi-binding-kit.txt.

- (SDK only) In order to use the Intel(R) Debugger, set the IDB_HOME
  environment variable. It should point to the location of the Intel(R)
  Debugger.

- The Eclipse* PTP 1.0 GUI process launcher is not available on
  Itanium(R) 2 based platforms.

- (SDK only) Use the following command to launch an Intel(R) MPI
  application with Valgrind* 3.3.0:
  # mpiexec -n <# of processes> valgrind \
      --leak-check=full --undef-value-errors=yes \
      --log-file=<logfile>.%p \
      --suppressions=<installdir>/etc/valgrind.supp <executable>
  where:
    <logfile>.%p - log file name for each MPI process
    <installdir> - the Intel MPI Library installation path
    <executable> - name of the executable file

-----------------
Technical Support
-----------------

Your feedback is very important to us. To receive technical support for
the tools provided in this product and technical information including
FAQs and product updates, you need to register for an Intel(R) Premier
Support account at the Registration Center.

This package is supported via Intel(R) Premier Support. Direct customer
support requests to:
https://premier.intel.com

General information on Intel(R) product-support offerings may be
obtained at:
http://www.intel.com/software/products/support

The Intel(R) MPI Library home page can be found at:
http://www.intel.com/go/mpi

The Intel(R) MPI Library support web site,
http://www.intel.com/software/products/support/mpi/
provides top technical issues, frequently asked questions, product
documentation, and product errata.

Requests for licenses can be directed to the Registration Center at:
http://www.intel.com/software/products/registrationcenter

Before submitting a support issue, see the Intel(R) MPI Library for
Linux* OS Getting Started Guide for details on post-install testing to
ensure that basic facilities are working.

When submitting a support issue to Intel(R) Premier Support, please
provide specific details of your problem, including:

- The Intel(R) MPI Library package name and version information
- Host architecture (for example, IA-32 or Itanium(R) architecture)
- Compiler(s) and versions
- Operating system(s) and versions
- Specifics on how to reproduce the problem. Include makefiles, command
  lines, small test cases, and build instructions. Use the
  <installdir>/test sources as test cases, when possible.

You can obtain version information for the Intel(R) MPI Library package
in the file mpisupport.txt.

Submitting Issues
-----------------

- Go to https://premier.intel.com
- Log in to the site. Note that your username and password are
  case-sensitive.
- Click on the "Submit Issue" link in the left navigation bar.
- Choose "Development Environment (tools, SDV, EAP)" from the "Product
  Type" drop-down list. If this is a software or license-related issue,
  choose the Intel(R) MPI Library, Linux* from the "Product Name"
  drop-down list.
- Enter your question and complete the fields in the windows that follow
  to successfully submit the issue.

Note: If access to your source code must be restricted to certain
countries, notify your support representative before submitting the
issue to determine whether your request can be accommodated.

--------------------------------
Disclaimer and Legal Information
--------------------------------

The Intel(R) MPI Library is based on MPICH2* from Argonne National
Laboratory* (ANL) and MVAPICH2* from Ohio State University* (OSU).

--------------------------------------------------------------------------

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL(R)
PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO
ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS
PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL
ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR
IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING
LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE,
MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER
INTELLECTUAL PROPERTY RIGHT. UNLESS OTHERWISE AGREED IN WRITING BY INTEL,
THE INTEL PRODUCTS ARE NOT DESIGNED NOR INTENDED FOR ANY APPLICATION IN
WHICH THE FAILURE OF THE INTEL PRODUCT COULD CREATE A SITUATION WHERE
PERSONAL INJURY OR DEATH MAY OCCUR.

Intel may make changes to specifications and product descriptions at any
time, without notice. Designers must not rely on the absence or
characteristics of any features or instructions marked "reserved" or
"undefined." Intel reserves these for future definition and shall have no
responsibility whatsoever for conflicts or incompatibilities arising from
future changes to them. The information here is subject to change without
notice. Do not finalize a design with this information.

The products described in this document may contain design defects or
errors known as errata which may cause the product to deviate from
published specifications. Current characterized errata are available on
request. Contact your local Intel sales office or your distributor to
obtain the latest specifications and before placing your product order.
Copies of documents which have an order number and are referenced in this
document, or other Intel literature, may be obtained by calling
1-800-548-4725, or by visiting Intel's Web Site.

Intel processor numbers are not a measure of performance. Processor
numbers differentiate features within each processor family, not across
different processor families. See
http://www.intel.com/products/processor_number for details.
BunnyPeople, Celeron, Celeron Inside, Centrino, Centrino Atom, Centrino
Atom Inside, Centrino Inside, Centrino logo, Core Inside, FlashFile,
i960, InstantIP, Intel, Intel logo, Intel386, Intel486, IntelDX2,
IntelDX4, IntelSX2, Intel Atom, Intel Atom Inside, Intel Core, Intel
Inside, Intel Inside logo, Intel. Leap ahead., Intel. Leap ahead. logo,
Intel NetBurst, Intel NetMerge, Intel NetStructure, Intel SingleDriver,
Intel SpeedStep, Intel StrataFlash, Intel Viiv, Intel vPro, Intel XScale,
Itanium, Itanium Inside, MCS, MMX, Oplus, OverDrive, PDCharm, Pentium,
Pentium Inside, skoool, Sound Mark, The Journey Inside, Viiv Inside, vPro
Inside, VTune, Xeon, and Xeon Inside are trademarks of Intel Corporation
in the U.S. and other countries.

* Other names and brands may be claimed as the property of others.

Copyright(C) 2003-2009, Intel Corporation. All rights reserved.