mpi_io - Introduction to the ROMIO implementation of MPI
       I/O routines


DESCRIPTION

       The MPI I/O routines are based on the I/O routines of
       the MPI-2 standard, in which derived datatypes are used
       to express data partitioning.  These routines are
       derived from the ROMIO 1.0.1 source code.  The current
       IRIX implementation contains all interfaces defined in
       the I/O chapter of the MPI-2 standard except shared
       file pointer functions (Sec. 9.4.4), split collective
       data access functions (Sec. 9.4.5), support for file
       interoperability (Sec. 9.5), I/O error handling (Sec.
       9.7), and I/O error classes (Sec. 9.8).  Because shared
       file pointer functions are not supported, the
       MPI_MODE_SEQUENTIAL file access mode to MPI_File_open
       is also not supported.
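
       For example, the following sketch uses a derived
       datatype built with MPI_Type_vector to give each
       process an interleaved partition of a common file.
       The file name "datafile" and the block sizes are
       illustrative only:

            #include <mpi.h>

            int main(int argc, char *argv[])
            {
                MPI_File     fh;
                MPI_Datatype filetype;
                MPI_Status   status;
                int          rank, nprocs;
                int          buf[400];  /* 4 blocks of 100 ints */

                MPI_Init(&argc, &argv);
                MPI_Comm_rank(MPI_COMM_WORLD, &rank);
                MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

                /* 4 blocks of 100 ints each, strided so the
                   blocks of all nprocs processes interleave
                   in the file. */
                MPI_Type_vector(4, 100, 100 * nprocs, MPI_INT,
                                &filetype);
                MPI_Type_commit(&filetype);

                MPI_File_open(MPI_COMM_WORLD, "datafile",
                              MPI_MODE_RDONLY, MPI_INFO_NULL,
                              &fh);

                /* Each process starts at its own block. */
                MPI_File_set_view(fh,
                    (MPI_Offset)(rank * 100 * sizeof(int)),
                    MPI_INT, filetype, "native",
                    MPI_INFO_NULL);

                /* Collective read; each process receives
                   its own partition of the file. */
                MPI_File_read_all(fh, buf, 400, MPI_INT,
                                  &status);

                MPI_File_close(&fh);
                MPI_Type_free(&filetype);
                MPI_Finalize();
                return 0;
            }

       Because MPI_File_read_all is collective, the library
       can merge the processes' interleaved requests into
       larger contiguous file accesses.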

   Limitations
       The MPI I/O routines have the following limitations:

       *   Beginning with MPT 1.6, the status argument is set
           by all read, write, MPIO_Test, and MPIO_Wait
           functions.  Consequently, MPI_Get_count and
           MPI_Get_elements now work when passed the status
           object from these operations; previously, they did
           not.  (See the example following this list.)

       *   All nonblocking I/O functions use an MPIO_Request
           object instead of the usual MPI_Request object.
           Accordingly, two functions, MPIO_Test and
           MPIO_Wait, are provided to test and wait on these
           MPIO_Request objects.  They have the same semantics
           as MPI_Test and MPI_Wait, as shown in the following
           synopsis:

           int MPIO_Test(MPIO_Request *request, int *flag,
           MPI_Status *status);

           int MPIO_Wait(MPIO_Request *request, MPI_Status *status);

           The usual functions, MPI_Test, MPI_Wait,
           MPI_Testany, and so on, do not work for nonblocking
           I/O, as the example following this list shows.

       *   All functions return only two possible error codes:
           MPI_SUCCESS on success and MPI_ERR_UNKNOWN on failure.

       *   End-of-file is not detected.  The individual file
           pointer is advanced by the requested amount of
           data, not by the amount actually read.  Therefore,
           after end-of-file is reached,
           MPI_File_get_position(3) returns an incorrect
           offset.
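
       The following sketch illustrates the first two
       limitations above: the nonblocking read is completed
       with MPIO_Wait rather than MPI_Wait, and the resulting
       status object is passed to MPI_Get_count to find the
       number of items actually read.  The file name
       "datafile" is illustrative only:

            #include <stdio.h>
            #include <mpi.h>

            int main(int argc, char *argv[])
            {
                MPI_File     fh;
                MPIO_Request request;  /* not MPI_Request */
                MPI_Status   status;
                int          buf[100], count;

                MPI_Init(&argc, &argv);

                MPI_File_open(MPI_COMM_WORLD, "datafile",
                              MPI_MODE_RDONLY, MPI_INFO_NULL,
                              &fh);

                /* Nonblocking read of 100 ints from the
                   individual file pointer. */
                MPI_File_iread(fh, buf, 100, MPI_INT,
                               &request);

                /* Complete the request with MPIO_Wait;
                   MPI_Wait does not accept an MPIO_Request. */
                MPIO_Wait(&request, &status);

                /* As of MPT 1.6, status is set, so
                   MPI_Get_count reports the item count. */
                MPI_Get_count(&status, MPI_INT, &count);
                printf("read %d ints\n", count);

                MPI_File_close(&fh);
                MPI_Finalize();
                return 0;
            }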

       MPI I/O supports direct access to files stored in XFS
       filesystems.  Direct access bypasses the system's
       buffer cache, which can improve performance in some
       specialized cases.  You can enable direct I/O for read
       and write operations by setting the corresponding
       environment variables, MPIO_DIRECT_READ and
       MPIO_DIRECT_WRITE, to the string "TRUE", as in the
       following example:

            setenv MPIO_DIRECT_READ TRUE
            setenv MPIO_DIRECT_WRITE TRUE
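
       In Bourne-compatible shells (sh, ksh, bash), the
       equivalent settings are:

            MPIO_DIRECT_READ=TRUE;  export MPIO_DIRECT_READ
            MPIO_DIRECT_WRITE=TRUE; export MPIO_DIRECT_WRITE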

   List of Routines
       The MPI I/O routines are as follows:

            MPI_File_c2f(3)
            MPI_File_close(3)
            MPI_File_delete(3)
            MPI_File_f2c(3)
            MPI_File_get_amode(3)
            MPI_File_get_atomicity(3)
            MPI_File_get_byte_offset(3)
            MPI_File_get_group(3)
            MPI_File_get_info(3)
            MPI_File_get_position(3)
            MPI_File_get_size(3)
            MPI_File_get_type_extent(3)
            MPI_File_get_view(3)
            MPI_File_iread(3)
            MPI_File_iread_at(3)
            MPI_File_iwrite(3)
            MPI_File_iwrite_at(3)
            MPI_File_open(3)
            MPI_File_preallocate(3)
            MPI_File_read(3)
            MPI_File_read_all(3)
            MPI_File_read_at(3)
            MPI_File_read_at_all(3)
            MPI_File_seek(3)
            MPI_File_set_atomicity(3)
            MPI_File_set_info(3)
            MPI_File_set_size(3)
            MPI_File_set_view(3)
            MPI_File_sync(3)
            MPI_File_write(3)
            MPI_File_write_all(3)
            MPI_File_write_at(3)
            MPI_File_write_at_all(3)

