
Space Index

0-9 ... 0 A ... 5 B ... 2 C ... 10 D ... 3 E ... 2
F ... 3 G ... 2 H ... 6 I ... 10 J ... 0 K ... 1
L ... 1 M ... 12 N ... 1 O ... 1 P ... 6 Q ... 0
R ... 7 S ... 8 T ... 7 U ... 10 V ... 14 W ... 2
X ... 0 Y ... 0 Z ... 0 !@#$ ... 0    

0-9

A

Page: Abaqus
Abaqus is a finite-element analysis code useful for multi-physics modelling and simulation. Here are the steps for running Abaqus in #serial and in #parallel on the HPC. This commercial software requires license tokens to run. The formula f
Page: Access Control on NFSv4
Setting read (r), write (w), and/or execute (x) bits on files and directories with chmod will suffice for most users of the HPCC. However, for more complex access control scenarios, the HPCC has implemented the Network File System version 4 (NFSv4). Users
Page: Adding a Private Key to Your Mac OSX Keychain
On Mac OSX, the native SSH client can use the built-in keychain directly. To add your private key to the keychain, simply use the command: ssh-add -K /path/of/private/key As an example, if your private key is stored at ~/.ssh and is named id_rsa, you would
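A minimal sketch of that invocation, assuming the default key location and name mentioned above (-K is the flag older macOS versions use to store the passphrase in the keychain):

    # Add the default private key to the macOS keychain
    ssh-add -K ~/.ssh/id_rsa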
Page: Advanced Scripting Using PBS Environment Variables
Your job submission script has a number of environment variables that can be used to help you write some more advanced scripts. These variables can make your code more portable and save you time. Variables listed by functionality The following list are so
Page: Advanced Specification of Resources
Specialized Hardware If you require GPUs for your computation, please add the feature flag and the resource request flag #PBS -l nodes=4:ppn=2:gpus=2 #PBS -l feature=gpgpu (specify the number of GPUs per node that is required). Similarly, to use request t
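A minimal job-script sketch of the request described in this excerpt (the node, process, and GPU counts are illustrative):

    # Request 4 nodes, 2 processes per node, and 2 GPUs per node,
    # and restrict the job to nodes with the gpgpu feature
    #PBS -l nodes=4:ppn=2:gpus=2
    #PBS -l feature=gpgpu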

B

Page: Bioinformatics Software Tutorials
HPCC Tutorials Specific examples related to running Bioinformatics tools on the HPCC, created by HPCC staff. ABySS - using parallel and serial versions of the ABySS assembler Using Velvet and Oases - information for using effectively on the HPCC BLAST wit
Page: Buy-In Account information
Accessing Buy-in Reservations Add the "-A accountname" argument https://wiki.hpcc.msu.edu/display/hpccdocs/Job+Properties+%28PBS+script+and+qsub+options%29 to qsub or in your job script (with the #PBS). Your accountname is a short name like 'ged' or 'mada
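For example (accountname is a placeholder for your buy-in account name):

    # On the command line:
    qsub -A accountname myjob.sub

    # Or equivalently inside the job script:
    #PBS -A accountname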

C

Page: CentOS 7.3
This container is available after first loading the 'singularity' module, then the 'centos/7.3' module. $ module load singularity $ module load centos/7.3 This is a general purpose container to use for programs that require newer libraries than are availa
Page: Check Point with DMTCP
NOTE: To run DMTCP correctly, the stack size limit cannot be "unlimited". To set it to a number n, run "ulimit -s n". For example, "ulimit -s 8190" sets the stack size to 8190. Users can also add "ulimit -s 8190" to their .bashrc file. DMTCP
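A minimal sketch of this setup, assuming the standard DMTCP launcher; the program name and checkpoint interval are illustrative:

    ulimit -s 8190                       # stack size must not be "unlimited"
    dmtcp_launch -i 3600 ./my_program    # checkpoint the process every hour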
Page: Check-pointing with BLCR
"Check-pointing" is a technique for stopping a process, saving it's state to disk in a way that the process can be restarted. Some Application include this feature, and we encourage users of the HPCC developing their own code to consider implementing ch
Page: Cluster Statistics
Showstats The command showstats shows general information about running, queued, and eligible jobs on the system. [ongbw@dev-gfx11 ~]$ showstats moab active for 6:05:48:06 stats initialized on Sun Aug 12 17:52:57 2012 Eligible/Idle Jobs: 156/757 (20.608%)
Page: Code Development
The following resources are available to aid software development at MSU. Code Development
Page: Compiling a Windows XP Executable (for condor)
Obtain an account for our condor build server: please fill out a request on this form http://www.hpcc.msu.edu/contact for an account on our Windows build server. Logging on to the condor build server: connect to gateway.hpcc.msu.edu. A graphical interface i
Page: Compiling C, C++ and Fortran Mex Files
Sometimes running MATLAB programs can be very slow, or users need to link MATLAB code to existing C++ and Fortran libraries. In these cases, MATLAB provides an interface that allows specially written C++ and Fortran code to run as MATLAB functions. These func
Page: Condor Support at MSU
What is Condor? Condor (http://research.cs.wisc.edu/htcondor/) is a specialized resource management software that enables researchers at MSU to leverage idle computers for their compute-intensive jobs. At MSU, the com
Page: Connecting to the HPCC
Accessing HPCC from campus or home To access MSU's HPC system, open a secure shell (SSH) connection to our gateway node, hpcc.msu.edu. In Linux, Unix or Mac OSX, simply type "ssh -X username@hpcc.msu.edu" in a terminal window. Users running OS X 10.8.x an
Page: Creating a Python virtualenv for msmbuilder
Post created by Aaron Beckett (iCER Student Intern) This tutorial provides step by step instructions for creating a Python virtual environment to installing msmbuilder. For more general information regarding python virtual environments on the hpcc, see th

D

Page: Description of the Processing Hardware
HPCC maintains a number of sub-clusters purchased at different times. The name of each sub-cluster is a composite of the main hardware architecture and the year the sub-cluster was purchased. All of the nodes in the main cluster run the same
Page: DIYABC
DIYABC (Do It Yourself Approximate Bayesian Computation) v0.7.2 and v1.0.4.37 are currently installed on the HPC. A detailed "How to use DIYABC (v0.7) on the HPCC" by Jeanette McGuire is listed below (10/11/2010) This document will help you run a program (DIYA
Home page: Documentation and User Manual
The HPCC is migrating to a new operating system and scheduler. For more information, see "Introduction to new 2018 HPC systems https://wiki.hpcc.msu.edu/x/SAIzAQ". For the new 2018 system, please refer to the documentation here https://wiki.hpcc.msu.edu/d

E

Page: Estimating the start time of a job.
The command showstart shows an estimate of when a job should start. Example Usage: showstart -e all PBS_JOBID If showq shows the job is still eligible but not yet active, you can check when the job will start by using the showstart command with the -e all
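For example (the job ID is illustrative):

    # Ask the scheduler for an estimated start time for job 12345678
    showstart -e all 12345678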
Page: Evaluation Gateway and Nodes
Note: as of June 2016, the previous evaluation gateways and nodes from 2010 and 2011 with NVIDIA Tesla cards are permanently off-line. However, the new "Laconia" cluster installed in June 2016 has several nodes with NVIDIA Tesla K80. See the System confi

F

Page: Files as Semaphores
A semaphore is a flag designed to restrict access to shared resources. Any time a program wants to use a resource, it must first set the semaphore flag. If the flag is already set, the process must wait until the flag is cleared. In parallel sys
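A common shell idiom for this pattern uses mkdir, whose creation step is atomic; this sketch is illustrative (the lock path is a placeholder) and is not necessarily the page's own recipe:

    LOCKDIR=/mnt/scratch/$USER/mylock
    # mkdir atomically creates the lock directory, or fails if it already exists
    until mkdir "$LOCKDIR" 2>/dev/null; do
        sleep 5      # another process holds the semaphore; wait and retry
    done
    # ... use the shared resource ...
    rmdir "$LOCKDIR" # clear the flag so other processes can proceed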
Page: Flash File System (ffs17)
Many researchers must use applications that generate large numbers of small files, and need a file system that can be accessed from multiple nodes at the same time. The Lustre file system is excellent for fast access to large files but is not suited for h
Page: FLUENT
FLUENT is a computational fluid dynamics (CFD) solver that provides a wide array of advanced physical models for fluid flow and heat transfer applications including multiphase flow. Running FLUENT on an Interactive node If you need to test your job you c

G

Page: General guidelines for which file systems to use
We employ parallel file system software (Lustre) for /mnt/scratch. There are four storage servers (OSSes) that service /mnt/scratch. Each storage server has access to storage targets (OSTs) which are a set of several hard drives that can be written and re
Page: GPU Computing
Interactive Login Nodes: dev-intel14-k20 (dual-socket 2.5 GHz 10-core Intel Xeon E5-2670v2, 20 cores, 128 GB memory, two Nvidia K20 Kepler cards); dev-intel16-k80 (dual-socket 2.4 GHz 14-core Intel Xeon E5-2680v4, 28 cores, 256 GB memory, eight Nvidi

H

Page: How to run job arrays larger than 1000
We have a system limit in place that prevents a single user from submitting more than 1000 jobs. Although users have suggested increasing this limit, it is in place to prevent users from flooding the scheduling system. However, as with any limitation, it can b
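One way to stay under the limit is to submit the work as several smaller array jobs and offset the array index inside the script; this is a sketch under that assumption (the script and variable names are placeholders), not necessarily the page's own recipe:

    # Submit 3000 tasks as three 1000-task array jobs
    qsub -t 0-999 -v OFFSET=0    array_job.sub
    qsub -t 0-999 -v OFFSET=1000 array_job.sub
    qsub -t 0-999 -v OFFSET=2000 array_job.sub

    # Inside array_job.sub, combine the offset with the array index:
    # TASK_ID=$(( PBS_ARRAYID + OFFSET ))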
Page: HPCC Advanced Topics
Page: HPCC Basics
The High Performance Computing Center (HPCC) @ MSU manages shared computing resources consisting of clusters http://wiki.hpcc.msu.edu/display/hpccdocs/Documentation+and+User+Manual#DocumentationandUserManual-SystemConfiguration and development nodes https
Page: HPCC File Systems
HPCC provides five types of file storage. They are referred to here as HOME, RESEARCH, SCRATCH, LOCAL and RAMDISK. This article addresses the differences between these file storage systems from a hardware and software point of view. The primary usag
Page: HPCC Powertools
The HPCC has put together a set of tools to help advanced users use the system more effectively. Most of these tools were designed by HPC staff to help them with their work and are not actively supported. If there is a problem, users can submit a request
Page: HPCC Quick Reference Sheet
This Quick Reference sheet is designed for people already familiar with the HPCC system at MSU. It has been specifically designed to focus on settings specific to MSU HPCC and help users who use many different HPC systems keep track of the differences. Us

I

Page: icc
Example Usage: icc -O3 mysource.cpp Description: The Intel C compilers are optimized for 64-bit systems. The options are grouped as recommended, advanced, or not recommended. -O3: Enable aggressive optimizations. -openmp: Enable the compiler to gener
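For example (source and output file names are illustrative):

    icc -O3 mysource.cpp -o myprogram           # aggressive optimization
    icc -O3 -openmp mysource.cpp -o myprogram   # additionally enable OpenMP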
Page: iCER Products and Services (Buy-in and Storage)
iCER is pleased to offer hardware buy-in options for MSU researchers. Please see http://icer.msu.edu/users/buy-options for information about purchasing priority access or storage.
Page: Importing Sequence Data into Galaxy
Galaxy Data Import To import data into Galaxy, the most effective manner is to follow the steps for dataset import outlined below. To do this, you will need: 1) An HPCC account (request an HPCC account https://contact.icer.msu.edu/account) 2) Log-in to G
Page: Index of Video Tutorials
Page: Information for Central Michigan University and Western Michigan University Users
This information is for users at Central Michigan University and Western Michigan University. Users with MSU NetIDs should disregard this information. Contacting the HPCC for Help. CMU Users: Please email cmichhelp@hpcc.msu.edu
Page: Installation of LAMMPS
Building LAMMPS "Building LAMMPS can be non-trivial." -- First sentence of installation instructions in LAMMPS manual. These instructions target the September 5th 2014 version of LAMMPS and will probably not be compatible with older versions. If you are a
Page: Installed Software
HPCC has an extensive list of software installed. To use a piece of software, an appropriate module must be loaded. This page reviews how to use modules before providing a list of installed software. Software Specific Tutorials Modules A module manages en
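A typical module session looks like this (the module name is illustrative):

    module avail            # list software available on the system
    module load MATLAB      # load a module and its environment settings
    module list             # show currently loaded modules
    module unload MATLAB    # remove the module again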
Page: Installing an X-server for Macs
Video Tutorial - Mac software installation instructions - XQuartz If you are running OS X 10.8 or later (including the latest version), you will need to install the X11 program XQuartz (an X server). See https://www.xquartz.org/
Page: Installing an X-server on Windows
X-windows is a method for running programs remotely on a Unix/Linux system, especially programs with a graphical user interface (e.g. windowing programs). It's also known as X11 or simply "X." The program MobaXterm available for Windows includes all the
Page: Installing Local Perl Modules with CPAN
CPAN http://www.cpan.org/ is a convenient way to build and install Perl modules, but many people have difficulty knowing how to do this if they lack "root" permissions. This tutorial will demonstrate how to install Perl modules to a local user space usin
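One common approach without root access uses the local::lib module; this sketch assumes local::lib is available and is not necessarily the exact method the tutorial uses (the module name is a placeholder):

    # Point Perl installs at a directory in your home space
    eval "$(perl -Mlocal::lib=$HOME/perl5)"
    cpan Some::Module    # build and install into ~/perl5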

J

K

Page: Kettering Users
This information is for users at Kettering University. Users with MSU NetIDs should disregard this information. 1) Request a Community ID You will need to request a Community ID at the following link: https://community.idm.msu.edu/selfservice/

L

Page: LAMMPS
LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) is available on HPCC. To use it, you should first load the Intel compiler suite. module swap GNU Intel The LAMMPS software environment module can then be loaded with the following command

M

Page: Managing Jobs
Listing of all your jobs To see a list of your submitted jobs, type qstat -u username. Example: [ongbw@dev-gfx11]$ qstat -u ongbw (the output is a table listing Job ID, Username, Queue, Jobname, SessID, NDS, TSK, required Memory, required Time, state S, and elapsed Time)
Page: Managing Perl modules
To manage your Perl environment, we recommend using perlbrew, http://metacpan.org/module/perlbrew. This allows you to create one or more self-contained Perl installations which live in your local $HOME directory, and al
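Typical perlbrew usage looks like this (the Perl version is illustrative):

    perlbrew install perl-5.16.0   # build a self-contained Perl under $HOME
    perlbrew switch perl-5.16.0    # make it the active interpreter
    perlbrew list                  # show the Perls you have installed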
Page: Mapping HPC drives to a campus computer with SMB
The following tutorial will show you how to map your HPC home or research directory using CIFS File Sharing http://en.wikipedia.org/wiki/Cifs. This page has been updated (8-8-17) to reflect new samba settings. Please review carefully. If your home directo
Page: Mapping HPC drives to a campus computer with SMB Legacy
This method is no longer used; please see Mapping HPC drives with Samba. The following tutorial will show you how to map your HPC home or research directory using CIFS File Sharing http://en.wikipedia.org/wiki/Cifs. This will only work if your computer has
Page: Mapping HPC drives with SSHFS
Besides mapping HPCC drives with SMB, HPCC users can mount HPCC file systems with SSHFS https://code.google.com/archive/p/win-sshfs/ on a local computer. Unlike SMB, which can only map home or research space via the MSU campus network, this sof
Page: Mathematica
The Mathematica software from Wolfram Research is available on the HPCC systems. We currently provide Mathematica, version 8.0. Most of this documentation was written for Mathematica 7.1, but should be valid for 8.0 as well. When you load the environ
Page: MATLAB
About MATLAB Various versions of MATLAB http://www.mathworks.com/products/matlab/ are installed on the cluster. By default, MATLAB R2014a is loaded. Other available versions of MATLAB can be discovered by typing hpc@dev-amd09:~> module avail MATLAB and the
Page: MATLAB Compiler mcc
The MATLAB compiler is available to hpcc users. Compiled MATLAB codes are advantageous because MATLAB licenses are not required during runtime, and can potentially run faster. To compile MATLAB programs, or to run compiled MATLAB programs, users need to l
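For example (file names are illustrative; the wrapper script is generated by mcc, and the runtime path is a placeholder):

    module load MATLAB
    mcc -m mycode.m                            # compile to a standalone executable
    ./run_mycode.sh <path-to-MATLAB-runtime>   # run via the generated wrapper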
Page: MATLAB Licenses
MATLAB and all of its toolboxes use a license server to manage the available licenses. The following is a list of the current licenses available on HPCC: Toolbox Number of Licenses MATLAB 45 SIMULINK 10 Bioinformatics_Toolbox 5 Database_Toolbox 10 Fuzzy_
Page: Monitoring a job
When you use the qsub command, a job will go through many states before it is complete. This tutorial is designed to give you an idea of what states the job goes through and how to get information about a job in each state. A flowchart is provided to help s
Page: move2bot
Note: Job priorities can only be decreased, not increased. Note: This cannot be used on array jobs. Note: This script can be used to move a job to the bottom of the queue once (see below). Note: To restore your jobs to prior priority, use the reset
Page: move2top
Note: Job priorities can only be decreased, not increased. Note: This cannot be used on array jobs. Note: This script can be used to move a job to the top of the queue once (see below). Note: To restore your jobs to prior priority, use the reset

N

Page: NumPy and SciPy
NumPy http://numpy.scipy.org/ is currently available for Python 2.7.2 on the HPCC system. To use NumPy: module load NumPy This will ensure that the correct version of Python is loaded along with the Intel Math Kernel Library (MKL http://software.intel.co
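For example, to load the module and verify that NumPy is importable (the version check is illustrative):

    module load NumPy
    python -c "import numpy; print numpy.__version__"   # Python 2.7.2 syntax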

O

Page: Oakland University Users
This information is for users at Oakland University. Users with MSU NetIDs should disregard this information. Oakland University users should refer to the documentation hosted on the OU knowledge base. https://kb.oakland.edu/uts/MSU_iCER_HPCC_Research_Clu

P

Page: Page Index
Page: Parallel profiling with Scalasca
Scalasca is a profiler capable of measuring and analyzing parallel program behavior during execution. This wiki serves as a basic user guide and will be updated periodically with more information and tips. Tutorial under development. Please contact iCER
Page: Per-Node CPU and Memory Layout
On modern architectures, understanding the relationship between processors, their cache, and memory can make a significant difference in performance. Here are the logical layouts of our main cluster systems, which were generated using hwloc. intel11 intel11.p
Page: Perl
Page: Python
Page: Python Scripting
Writing Python scripts incrementally For this tutorial, you'll need to make sure you are logged into a dev-node. Python 2.7.2 is loaded natively by default. We’ve been talking a lot about scripts, and walking through some of them. Now we’ll show you how

Q

R

Page: R
Various versions of R are available at HPCC. Definition from the R-project website (http://www.r-project.org/): "R is a language and environment for statistical computing and graphics. It is a GNU project http://www.gnu.org/ whic
Page: R: using parallel packages
Preparation: Basic knowledge of R language, Linux, and HPCC environment. Log in: ssh -XY class0@hpcc.msu.edu (replace class0 with your own account), and then log in to a dev-node (e.g., ssh dev-intel14). Type xclock to make sur
Page: Requesting HPCC Accounts
To obtain (free) HPCC account(s), a faculty member must request accounts by filling out http://contact.icer.msu.edu/account. Information required to complete the form includes a list of names and NetIDs of research grou
Page: Resource Management and Job Scheduler
The High Performance Computing cluster is a shared resource, and while you may log in to the development node at any time and run your programs for < 2 hrs, to run a program on the cluster you must wait in a queue. To do so, one writes a
Page: Restoring files from backup
Backup restoration is currently disabled due to a bug in ZFS that can cause a Kernel Panic which will cause a filer to become unresponsive. When this bug is patched we will once again allow users to access their backups. If you need files restored please
Page: RStudio
This container is available after loading the 'singularity' module, then the 'rstudio/1.0.143' module. $ module load singularity $ module load rstudio/1.0.143 RStudio uses the X11 graphical display system that is common in the Unix world. To see the displ
Page: Running Programs Interactively
In some cases using the scheduling system is not practical and users need to run a job interactively. Jobs that need to be run interactively typically require a lot of user input and run a graphical user interface (GUI). Some examples of typical interacti

S

Page: Scheduling Interactive Jobs
Both command-line interface (CLI) and graphical user interface (GUI) interactive jobs may be scheduled on the HPCC systems. By using the scheduler, you can run longer than the CPU time limits imposed on the development nodes. Scheduling a Command-Line or
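With the Torque/Moab tooling described here, an interactive job is typically requested with qsub -I, adding -X for GUI programs; the resource values below are illustrative:

    # Request 1 core for 2 hours, with X11 forwarding for graphical programs
    qsub -I -X -l nodes=1:ppn=1,walltime=02:00:00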
Page: Scheduling Jobs
The High Performance Computing Center (HPCC) @ MSU manages shared computing resources consisting of clusters http://wiki.hpcc.msu.edu/display/hpccdocs/Documentation+and+User+Manual#DocumentationandUserManual-SystemConfiguration and development nodes https
Page: Singularity (CentOS6)
OBSOLETE DOCUMENTATION: THIS IS FOR THE OLD CENTOS6 SYSTEM AND IS NOT ACCURATE. Please see the new documentation on Singularity on CentOS7 https://wiki.hpcc.msu.edu/display/ITH/Singularity Singularity allows users to run software inside of containers. These c
Page: Software
Page: Software Specific Tutorials
The following is a list of tutorials put together by iCER staff to help with some of the more common software on the system.
Page: SSH Key-Based Authentication
Typically, when someone uses a SSH client, that person needs to type a password for each new connection started. This can become bothersome if one is frequently making new connections or is in a situation where others may be physically present when the pa
Page: Stata
Many versions of Stata are installed on the ICER HPC. When you log in, Stata SE version 15 is available by default. You can use it immediately. Stata has a command line version and a GUI (windowed) version. To use the command line, type stata at the
Page: System Information
Users can use MSU's HPCC resources by first connecting to gateway.hpcc.msu.edu; gateway and rsync gateway are the only two nodes directly accessible from the internet. The gateway node is not meant for running software, connecting to scratch space

T

Page: TAU
Overview TAU (Tuning and Analysis Utilities) is a toolkit that can measure the parallel performance of OpenMPI programs written in C, C++, and Fortran. TAU allows you to analyze and track the performance of individual processes. Depending on the level i
Page: TensorFlow
Note: On our new CentOS7 system these methods are not necessary. Please see the TensorFlow page in our CentOS7 docs.
Page: TensorFlow with LinuxBrew environment
These instructions are obsolete. Note that we are migrating the system to CentOS 7, and we recommend installing TensorFlow on that system. See 2018 Environment Update and Migration, and once you are using the new system, read our CentOS7 TensorFlow documen
Page: TensorFlow-GPU using Singularity Containers
This refers to our older CentOS6 nodes. For new documentation, please see High Performance Computing at iCER. Background: TensorFlow is a popular machine learning package from Google that includes binaries that can be called from Python. The software can t
Page: Transferring Files to the HPCC
The HPCC is migrating to a new operating system and scheduler. For more information, see "Introduction to new 2018 HPC systems https://wiki.hpcc.msu.edu/x/SAIzAQ". For the new 2018 system, please see File Transfer and File Storages documentation This docu
Page: Transferring large files to and from the HPCC
If you need to share large files with a collaborator, there are several ways to do so. Transferring a few files on a one-time basis If you need to transfer a few files to a collaborator off campus, MSU provides a resource for this. FileDepot http://techb
Page: Tutorials
Overview Using the HPCC Computational Resources This tutorial explains how to get connected and start using the HPCC computational resources for your research. It is divided into the following sections: Software Specific Tutorials There are also oth

U

Page: Useful HPCC Commands
The following is a list of commonly used commands that are available on the HPCC systems. To learn how to use most of these commands, type man <command_name> in the command line on one of the login nodes (e.g., dev-intel14). If the man page is unavailable
Page: Using $TMPDIR on local disk for your jobs
Background The HPCC file systems for Home directories, shared research spaces and our Scratch disk system are connected via the HPCC network to all nodes in the cluster ( see HPCC File Systems Overview ). The big advantage to this, of course, is that once
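A typical staging pattern in a job script (paths and the program name are placeholders):

    # Copy input to the node-local disk, run there, then copy results back
    cp $HOME/mydata/input.dat $TMPDIR/
    cd $TMPDIR
    ./my_program input.dat > output.dat
    cp output.dat $PBS_O_WORKDIR/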
Page: Using GPGPU Devices in Mathematica
Mathematica 8.0 has the capability to use GPGPU devices. Setting up Mathematica for Use with GPGPU Devices To develop and test notebooks which take advantage of GPGPU computing in Mathematica, you should login to the dev-gfx10 development node. After you
Page: Using Interactive Jobs
Page: Using Mathematica in Batch Mode
Mathematica can be used to run computations in a non-interactive manner. This requires preparing a Mathematica script ahead of time. The script is basically equivalent to the input lines of a Mathematica notebook. Several output functions or operators must u
Page: Using Mathematica Interactively
You can run an interactive Mathematica session on the special node for interactive jobs, or you can use the batch scheduler to set aside a certain number of dedicated cores for you to use interactively. You can also choose between using the text console u
Page: Using Multiple Cores in Mathematica
Mathematica supports automatic parallelization. Up to four cores will be used per license in use. The Mathematica computation engine is called MathKernel. With the default configuration, one such kernel is started along with a user interface. When a paral
Page: Using Python virtualenv on the HPCC
Python has a lot of packages, modules and libraries that researchers may want to use. However, it is difficult for iCER and the HPCC to keep up with and install all of the different libraries, versions, and conflicts between them. Users who need Python so
Page: Using The Anaconda Python distribution on HPCC
These instructions are obsolete: please see Using conda. Python and numeric/analytic libraries can be difficult to install and configure, as this requires several C and FORTRAN libraries to be installed correctly, and Python libraries to be installed in a way
Page: Utilizing HTCONDOR at ICER
If you are having problems submitting your jobs from any of the general dev nodes within the HPCC clusters, please use accumulator.hpcc.msu.edu, which is the specific dev node for condor within the HPCC systems. The condor commands are native to your path

V

Page: Video Tutorial - Getting Started using HPCC
(6:30 Minutes) Basic overview for getting an account, installing software and using the HPCC. Links in Video: Request an account - https://contact.icer.msu.edu/account XQuartz, X11 server for Mac - http://xquartz.macosfor
Page: Video Tutorial - HPCCUSB
(4 minutes) Instructions for using a portable iCER USB drive (available on request). Note: clear your browser cookies if you have problems viewing the video.
Page: Video Tutorial - Mac software installation instructions - XQuartz
Note: clear your browser cookies if you have problems viewing the video. Download Video
Page: Video Tutorial - Map Home directory using MacOS
(2 minutes) How to mount your home directory on a Mac. Note: clear your browser cookies if you have problems viewing the video.
Page: Video Tutorial - Map Home directory using Windows
(2 minutes) How to mount your home directory on Windows. Note: clear your browser cookies if you have problems viewing the video.
Page: Video Tutorial - MobaXTerm
(7 minutes) How to install and use the MobaXTerm X11 server and network tools on Windows. Note: clear your browser cookies if you have problems viewing the video.
Page: Video Tutorial - Modules
(4 minutes) Demonstration of how to use the module system to use installed software on the HPCC. Note: clear your browser cookies if you have problems viewing the video.
Page: Video Tutorial - MPI
(9 minutes) How to use MPI (Message Passing Interface) on the HPCC. Note: clear your browser cookies if you have problems viewing the video.
Page: Video Tutorial - Powertools for power users
(6 minutes) Instructions for using powertools to run scripts and examples developed by HPCC staff. Note: clear your browser cookies if you have problems viewing the video.
Page: Video Tutorial - Putty
(2 minutes) Instructions for downloading and installing an SSH client on Windows. Note: clear your browser cookies if you have problems viewing the video.
Page: Video Tutorial - Submitting a Job on the HPCC
Note: clear your browser cookies if you have problems viewing the video.
Page: Video Tutorial - Windows Example
(6 minutes) Demonstration of how to use Windows to work with the HPCC. Note: clear your browser cookies if you have problems viewing the video.
Page: Video Tutorial - WinSCP
(5 minutes) Instructions for downloading and installing an SCP client on Windows. Note: clear your browser cookies if you have problems viewing the video.
Page: Video Tutorial - XMing
(6 minutes) Instructions for downloading and installing an X11 server on Windows. Note: clear your browser cookies if you have problems viewing the video.

W

Page: Windows Software and Installation Instructions
The following table shows the categories of software that are generally needed to access and use the HPCC: Software Type Required Descri
Page: Working with the NFS automounter
The NFS automounter is used by some of the largest data-intensive sites on the planet. HPCC home directories and group research space are both implemented on the ZFS filesystem, and are mounted on compute nodes with NFS by the Linux automounter. This allo

X

Y

Z

!@#$
