This section of the wiki concerns the operations of the CMS department.

Currently, the main server facility for CMS is located in 1T1 Annenberg, building number 16 on the campus map.

The access list for 1T1 (people with card key access) consists of:
  • David Leblanc, x2402
  • Patrick Cahalan, x3290
  • Jeri Chittum, x6251
Special access for networking work includes:
  • Joe Monaly, x2109
The historical access list can be found below.

The CMS systems administration team is located in 112 Annenberg.

Design Considerations

The Annenberg Server Room was designed based upon the historical usage of compute resources by the (then-CS) CMS faculty, projected out to 2025 using the EPA's guidelines for future power consumption trends. Barring a cutover to DC power supplies or some other major change in rack-mounted server power supplies, this represents a conservative but fiscally responsible baseline for the department's first 15 years of occupation of the building. Any major computing effort that exceeds the design parameters of the room will have to be accommodated elsewhere on campus.

Cooling

The server room is supplied by a large (8") chilled water feed that enters the Annenberg building at grade on the north side of the building near the auditorium, travels up into the ceiling at that point, cuts across the northern hallway and into the janitorial closet (1J1), where it drops below the floor and enters the server room. The emergency shut-off valves for this chilled water pipe are located in 1J1; in the event of a catastrophic failure of pipe integrity, the shut-off valves must be engaged in that room. The key to the janitorial closet is kept in 112 Annenberg.

The server room consists of four rows of five self-enclosed cabinets with internal cooling sidecars manufactured by the Rittal Corporation; the cabinets in each row share a common plenum. One of the four rows has four cooling units; the remaining three rows have five cooling units each. The row with four cooling units has a capacity of approximately 22kW per cabinet, and the rows with five cooling units have a capacity of approximately 28kW per cabinet. Since the cabinets in a row share a common plenum, it is possible for one cabinet to exceed its base capacity as long as the other cabinets in the row are under the top capacity and the differential is not so great as to impact the airflow in the over-subscribed cabinet. In any event, it is not expected that CMS will reach heat-load capacity in any cabinet in the immediate future.
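For back-of-the-envelope planning, the figures above imply a total design capacity of roughly 530kW for the room. A minimal sketch of that arithmetic (the per-cabinet figures come from the description above; which row has only four cooling units is not stated, so row 1 is assumed here, and the function names are our own):

    # Sketch of the 1T1 cooling envelope, using the figures quoted above.
    # Assumption: row 1 is the four-cooling-unit (22kW/cabinet) row.
    ROWS = {
        # row number: (cabinets in row, approx. capacity per cabinet in kW)
        1: (5, 22),
        2: (5, 28),
        3: (5, 28),
        4: (5, 28),
    }

    def room_capacity_kw():
        """Total design heat-load capacity across all four rows."""
        return sum(cabinets * kw for cabinets, kw in ROWS.values())

    def row_fits(row, cabinet_loads_kw):
        """Check proposed per-cabinet loads (kW) against a row's shared plenum.

        Because cabinets in a row share a common plenum, this models the row
        total as the binding limit: one cabinet may exceed its base capacity
        so long as the row as a whole stays under capacity. The airflow
        caveat from the text (the differential must not be too large) is
        not modeled here.
        """
        cabinets, kw = ROWS[row]
        return sum(cabinet_loads_kw) <= cabinets * kw

    print(room_capacity_kw())                  # 530 (kW)
    print(row_fits(1, [26, 20, 20, 20, 20]))   # True: 106kW <= 110kW row limit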

There is also an in-room Liebert CRAC unit for comfort cooling, which is supplied by the main chilled water feed.

Power

The server room is supplied power via a dedicated 750kVA transformer (T2) in the Main Electrical room (room 1E2, first floor Annenberg, southeast corner of the building). Access to this room is restricted to campus facilities; in the event of an emergency, call x4717. In the Main Electrical room, there are feeds to the server room, which terminate at the large electrical panel in the server room's southeast corner.

There are four outgoing circuits from that room, which attach to the wall-mounted step-down Emerson PDUs (1, 2, and 3) and the in-room CRAC unit. Essentially, each PDU receives 225kVA @ 480V, which it steps down to the 208V circuits that are connected to the individual power strips (the Eaton Rack PDUs, described more fully below) in each cabinet.
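For a rough sense of the currents involved, a minimal sketch (this assumes the PDU feeds are three-phase, which is typical for such units but not stated explicitly above; the function name is our own):

    import math

    def three_phase_current_amps(kva, volts):
        """Line current for a three-phase feed at the given apparent power."""
        return kva * 1000 / (volts * math.sqrt(3))

    # Each wall-mounted Emerson PDU receives roughly 225kVA at 480V...
    print(round(three_phase_current_amps(225, 480)))  # ~271A on the 480V side
    # ...and redistributes it across its 208V branch circuits.
    print(round(three_phase_current_amps(225, 208)))  # ~625A total at 208V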

With the exception of cabinet 1.3, which houses the critical infrastructure, none of these circuits is backed by an uninterruptible power supply. Cabinet 1.3 has an in-cabinet 12kW 60A UPS, which is described more fully below.

Network

Public Network

There are 144 public network drops in 1T1, supplied via a patch panel on the western wall that is tied into the building network switching. These 144 drops are currently on the 131.215.140.0/24 and 131.215.141.0/24 subnets, but all machines in 1T1 should be configured to operate on the 140 network, after which the 141 overlay will be removed. There is also a backnet in the server room, which supplies network connectivity to the Eaton Rack PDUs and Rittal cabinet controllers for the purpose of monitoring and managing the physical power and cooling infrastructure.
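As an aid during that transition, a minimal sketch for flagging hosts still on the 141 overlay (the subnet values come from above; the helper name is our own, built on Python's standard ipaddress module):

    import ipaddress

    TARGET = ipaddress.ip_network("131.215.140.0/24")   # where all 1T1 hosts belong
    OVERLAY = ipaddress.ip_network("131.215.141.0/24")  # to be removed after migration

    def needs_migration(host_ip):
        """True if a 1T1 host is still addressed on the 141 overlay."""
        return ipaddress.ip_address(host_ip) in OVERLAY

    print(needs_migration("131.215.141.17"))  # True: still on the overlay
    print(needs_migration("131.215.140.17"))  # False: already on the 140 network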

Private Network

There is a private network backnet in 1T1, originally installed for monitoring of the physical infrastructure. This network is supported by two 48-port 100Mb/s switches. Currently, the configuration for the private network is as follows:

Cabinet 1.1 is 10.1.1.0 - 10.1.1.255
Cabinet 1.2 is 10.1.2.0 - 10.1.2.255
Cabinet 1.3 is 10.1.3.0 - 10.1.3.255
...
Cabinet 2.1 is 10.2.1.0 - 10.2.1.255
Cabinet 2.2 is 10.2.2.0 - 10.2.2.255
...
Cabinet 3.1 is 10.3.1.0 - 10.3.1.255

Generally, a cabinet with the identifying number N.M will possess the private address space 10.N.M.1 - 10.N.M.255.

For every cabinet (except 4.5, which has no LCP unit attached), the LCP unit is assigned the .254 address on the corresponding network. Thus, the LCP numbered 3.1 is at 10.3.1.254, the LCP numbered 4.3 is at 10.4.3.254, etc. The rack-mounted power distribution units have the .253 and (if applicable) the .252 addresses on their respective subnet.

Any host that is connected to the backnet in any given cabinet will thus be assigned an IP address in the range of 10.N.M.2 - 10.N.M.250, which allows us to reserve the 10.N.M.1 address for a gateway (if applicable in the future).

The bastion host that accesses the backnet, located in cabinet 1.3, will be 10.1.3.1.

Initially, all devices on the backnet will be on a single flat subnet, using 255.0.0.0 as their netmask (so that every 10.N.M.x address falls within one subnet), with the bastion host serving as a gateway.
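Putting the scheme together, a minimal sketch of the addressing rules described above (function and variable names are our own; the .252 address applies only where a cabinet has a second rack PDU):

    import ipaddress

    BACKNET = ipaddress.ip_network("10.0.0.0/8")  # single flat subnet, 255.0.0.0 mask
    BASTION = ipaddress.ip_address("10.1.3.1")    # bastion host/gateway in cabinet 1.3

    def cabinet_subnet(n, m):
        """Private address block for cabinet N.M on the 1T1 backnet."""
        return ipaddress.ip_network(f"10.{n}.{m}.0/24")

    def reserved_addresses(n, m):
        """Reserved backnet addresses for cabinet N.M under the scheme above."""
        base = f"10.{n}.{m}"
        return {
            "gateway": f"{base}.1",                       # held for a future gateway
            "hosts": (f"{base}.2", f"{base}.250"),        # general host range
            "rack_pdus": [f"{base}.253", f"{base}.252"],  # .252 only with a 2nd PDU
            "lcp": f"{base}.254",                         # no LCP in cabinet 4.5
        }

    print(reserved_addresses(3, 1)["lcp"])          # 10.3.1.254, per the example above
    print(cabinet_subnet(4, 3).subnet_of(BACKNET))  # True: all cabinets on one net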

The 1T1 room is described more fully in the attached documents:
  • Power Outages
  • Liebert CRAC Unit
  • Rittal LCP-Server Cabinets
  • Emerson PDU Cabinets
  • Eaton Rack PDUs
  • Quotes and Invoices

-- DavidLeBlanc - 2019-10-17