IPC Computing Resources

Keeping in mind the ever-growing influence of computers in diverse spheres such as science, technology, and entertainment, the English Press Club decided to take a leaf out of Linus’s book and find out more about the Information Processing Centre (IPC). The IPC serves as the central computing facility of BITS Pilani. It hosts and manages the campus’s computing and networking infrastructure, which covers local and external connectivity as well as services such as email. Most IPC services are available from early morning to midnight, with some specialized labs offering around-the-clock computing facilities.

The campus hosts about 1000 desktops and workstations, including 350 in a central location, about a dozen compute servers, multi-terabyte storage, and a variety of other peripherals. These systems run heterogeneous operating environments, both Linux and Windows, and provide development tools for students and staff.

The campus hosts a state-of-the-art, completely switched, voice-enabled local area network (LAN) with a 1 Gbps fiber-optic cable backbone. The network has more than 5000 wired data ports and provides connectivity to instructional and administrative buildings, hostels, guest houses, the Library, and staff residences. More than 800 access points have been deployed across campus to support Wi-Fi connectivity. External internet connectivity is provided through a 3 Gbps leased line. The network support team maintains this facility and resolves issues with both the network and the wider computing infrastructure through an online portal.

The server room is where the network connectivity devices, computer servers, and their associated components are housed. This room is part of a data centre, which typically houses several physical servers in different form factors, such as rack-mounted, tower, or blade enclosures. The IPC server room serves all the general IT requirements of the campus and fulfils computing requirements as needed by specific departments. The network hardware is sourced from Cisco and Juniper, while the servers and related peripherals come from HP, Dell, and IBM.

High-Performance Computing (HPC) Facilities

Depending on the requirements of the problem to be computed, the IPC offers two different computing systems: one built to handle CPU-heavy jobs (the HPC cluster), and the other designed for GPU-intensive workloads such as graphics and deep learning (the GPU stack).

The HPC cluster consists of one head node and twelve compute nodes, each powered by two Intel Xeon CPUs clocked at 2.40 GHz, with eight cores per CPU and 96 GB of RAM; that works out to 192 compute cores across the cluster (12 nodes × 2 CPUs × 8 cores). The system also has two GPU nodes, and the cluster offers a total of 50 TB of storage space.
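
The article does not specify the cluster's software stack, but clusters of this kind are commonly programmed through MPI. As a rough illustration only, the following minimal Python sketch, assuming an MPI installation with the mpi4py bindings (an assumption, not a confirmed detail of the IPC setup), splits one computation across many cores:

    # Minimal MPI sketch: spread a computation over the cluster's cores.
    # Assumes an MPI implementation plus the mpi4py bindings are installed;
    # the IPC's actual software stack is not stated in the article.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # this process's ID, 0..size-1
    size = comm.Get_size()   # total processes, e.g. up to 192 on this cluster

    # Each rank sums a strided slice of 1..N; rank 0 collects the grand total.
    N = 1_000_000
    local = sum(range(rank + 1, N + 1, size))
    total = comm.reduce(local, op=MPI.SUM, root=0)

    if rank == 0:
        print(f"Sum of 1..{N} over {size} ranks: {total}")

A program like this would typically be launched across the nodes with a command such as mpirun -np 192 python sum.py, usually via whatever job scheduler the cluster runs.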

The GPU stack is powered by four Intel Xeon 4110 servers, assisted by five NVIDIA Tesla GPUs, which are renowned for their prowess in deep-learning applications.
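
As an illustration of how such a stack is usually exercised, here is a minimal sketch that detects the GPUs and runs a computation on one. It assumes a CUDA-enabled build of PyTorch is available; the article does not say which frameworks the IPC actually installs.

    # Minimal GPU sketch: detect the Tesla cards and run a matrix multiply.
    # Assumes PyTorch with CUDA support is installed; the IPC's actual
    # software stack is not stated in the article.
    import torch

    if torch.cuda.is_available():
        print(f"GPUs visible: {torch.cuda.device_count()}")   # e.g. 5 Teslas
        print(f"Device 0: {torch.cuda.get_device_name(0)}")
        x = torch.randn(4096, 4096, device="cuda")
        y = x @ x                  # executed on the GPU
        print(f"Result norm: {y.norm().item():.2f}")
    else:
        print("No CUDA-capable GPU detected; falling back to CPU.")

Deep-learning jobs on such a stack follow the same pattern: move tensors and models to the CUDA device, then let the framework dispatch the heavy linear algebra to the GPUs.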

Computing systems of this calibre produce a lot of heat, so the entire setup is cooled by two water-cooling units (one of 17 tons and one of 25 tons) and two 17-ton air-conditioning units. The cooling systems are managed by the Estate Management Unit (EMU).

The computing resources are primarily used by the Computer Science, Mechanical Engineering, Chemical Engineering, Chemistry, Physics, Biology, and Mathematics departments for high-performance calculations. Students who require such facilities for project-type courses may approach the IPC operator. The IPC plans to upgrade to the next generation of high-performance computing systems based on the requirements of the faculty, research scholars, and students.