S1 Tour Script
Introduction
S1 – Welcome to our Sydney facility, part of our world-class national portfolio of carrier- and integrator-neutral facilities, which also spans Brisbane, Melbourne, Canberra and Perth.
Before we get started on the tour, let’s make sure I understand what you are looking to learn whilst you are here so I can ensure I cover those points for you.
Why are you taking a tour of the data centre today?
-
S1 is strategically located 15km from Sydney’s CBD and is easily accessible via road, rail and bus transport.
-
Within S1 we have four data halls, each of approximately 1,450m2, with a combined capacity of 2,800 racks. These four data halls sit within an Uptime Institute Tier III rated facility with all the associated power, cooling and security infrastructure necessary to maintain a 100% uptime environment. Today you will see the key components of the facility, and I’ll show you how you will manage your own infrastructure within this or any of our other facilities.
-
Customer parking at the front, right-hand entry of the facility can be reserved via our national Service Management Centre.
-
As you may have seen on the way in, S1 is a highly secure site. The perimeter is secured by a 2.1m anti-scaling fence that meets AS1725 standards.
-
There are three entry points: pedestrian entry, car park entry (plus separate exit) and the loading dock for deliveries, all of which are under continual surveillance and secured by biometric swipe card access.
-
The security office is manned 24 hours a day, 365 days a year by NEXTDC concierge personnel. The screens in the office display images from approximately 135 security-camera positions located inside and outside the facility. The screens also display news and weather channels so that our team is aware of the external environment and is monitoring for any threats that may require them to be on heightened alert. Examples of such threats include acts of terrorism, industrial accidents or severe storms.
-
All the glazing you can see in this area is bullet-resistant glass manufactured by Gunnebo.
-
From this point all entry to secure areas is via biometric fingerprint readers. Only authorised people can access the data centre, and each individual’s profile provides access only to the areas for which they have been granted specific role-based security privileges. For example, access is restricted to the hall, pod and rack where your infrastructure is located, plus the general chill-out rooms and meeting rooms, as defined by your company.
-
The three bullet-resistant ‘mantraps’ (also called air-locks or security portals) protect access to the secure areas of the data centre. Each of these cylindrical portals has a front and a back door, and only one door can be opened at a time, preventing piggybacking and tailgating. They also prevent pass-back: a card can only be used to enter once before it is used to exit, so you can’t swipe someone else in and then enter again yourself. In addition, if excessive weight is detected, for instance if there were a second person inside, pass-through is denied. We also have a side mantrap suitable for disabled access.
While at the lobby – Talk about access
-
24/7 x 365 day permanent access (IDAC): If authorised by your company you will be required to complete a site-specific induction that covers safety and other operational processes. After completing induction you will receive a swipe card, also known as an Identity Access Card (or IDAC), programmed specifically for your level of access.
-
A ‘biometric template’ is what is stored on our IDAC cards. This template is a digital reference of distinct characteristics extracted via the bio-enrolment process. The template is stored only on the card itself and not in our security system.
-
The cards and readers we use from HID Global have multi-layered security protocols and Secure Identity Object (SIO) data binding. This binds a credential to an object (such as a fingerprint to an IDAC) to prevent cloning, which means the biometric template can never be associated with any other card.
-
The cards NEXTDC utilise feature a custom encryption protocol, meaning they cannot be used on any reader other than NEXTDC readers within a NEXTDC facility.
-
Visitor / Contractor / Tour Access: Visitors are allowed to enter the facility if they have been registered 48 hours in advance via the Guest Access process. An authorised person is always required to escort them.
-
Permit to Work: An approved Permit to Work must be in place prior to any contractor commencing any potentially hazardous work. Customers must apply for a Permit to Work seven days prior to the planned commencement of work. The Permit to Work form can be submitted through ONEDC® or the NEXTDC Service Management Centre.
Through Gunnebos
Position 2 – Main Boardroom
-
The sound-resistant boardroom is available for customers to use, but needs to be booked in advance. This room will soon support video conferencing and is ideal for important client meetings or staff reviews. These rooms also feature floor-to-ceiling backboards for workshops and planning sessions.
-
We also have another smaller meeting room outside of the Gunnebos near the front door alongside the general waiting area, which can also be booked by customers and partners.
-
Our customers can use these facilities nationally in any of our data centres to host their staff or customer briefings.
Position 3 – The MEP (DRUPS)
-
Entering the building via Eden Park Drive are our three critical utilities: power, water and telecommunication connections.
-
Electrical power is delivered from the public grid via three 11kV feeders, any one of which can fail with no impact to the site. Should the grid fail we have generators on site that I will talk more about later.
-
Water is critical for cooling, and we have two diverse feeders from the public grid in place to ensure that street-level outages do not impact normal operation. For emergency scenarios we also have four water tanks onsite (with provision for a fifth), providing a combination of rainwater and domestic water storage. Total capacity is currently almost 400,000 litres, ensuring that the data centre can continue to operate for at least 24 hours if the facility were to lose both public feeds.
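For reference, a minimal back-of-envelope sketch in Python of what that storage implies, using only the figures above (the make-up rate is derived from them, not a measured value):

```python
# Back-of-envelope check of water autonomy if both public feeds were lost.
# The ~400,000 L total and 24-hour minimum are the quoted figures; the
# implied make-up rate is derived from them for illustration only.

TOTAL_STORAGE_L = 400_000      # combined rainwater + domestic storage (approx.)
MIN_AUTONOMY_HOURS = 24        # stated minimum ride-through on stored water

implied_max_consumption_l_per_hr = TOTAL_STORAGE_L / MIN_AUTONOMY_HOURS
print(f"Implied worst-case make-up rate: {implied_max_consumption_l_per_hr:,.0f} L/hr")
# ~16,700 L/hr; any lower actual consumption extends autonomy beyond 24 hours.
```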
Position 4 – Walk to south corridor first
-
Here we are in our ‘Hunt for Red October’ corridor, very similar in look and feel to our M1 data centre.
-
S1 has three high-voltage feeds from the Ausgrid Macquarie Park substation. Each feed operates at 11,000 Volts and provides 7.5 megavolt-amperes (MVA) at maximum capacity. As a High Voltage customer of Ausgrid, NEXTDC manage the high-voltage switching within S1.
-
Power enters the building via rooms built specifically for Ausgrid and under its sole control. From there the HV passes to NEXTDC’s control via our HV switch room, and from the HV switch room it travels ultimately to the 13 transformers, each of which converts the 11,000 Volts to 400 Volts. This is the voltage at which the Diesel Rotary Uninterruptible Power Supply (DRUPS) units operate, and the level on which all electrical distribution is built, for example the power into the data halls.
-
Behind each of these red doors is one of our DRUPS units, including a dedicated diesel generator for each. The DRUPS is designed and delivered by renowned German manufacturer Piller. Currently we have three operational units, with expansion to five already underway, and the remainder will come online as and when required.
-
We chose this particular DRUPS system because of its efficiency, low environmental impact and ease of maintainability.
-
The DRUPS are deployed in an N+1 configuration, which means that we can perform maintenance without losing any capacity. This is independently certified to Uptime Institute Tier III specifications and is known as concurrent maintainability.
-
The DRUPS replace a more traditional data centre’s battery-based static UPS and back-up generator. They bridge the gap between grid failure and generator start-up via an electric motor/generator connected to a six-tonne power bridge, which houses a flywheel suspended in a helium-filled chamber spinning at 3,000rpm when operating at full capacity.
-
While external mains power is connected, the motor constantly keeps the flywheel fully spun up. If mains power is lost, the flywheel continues to spin and the motor switches to generator mode, providing power to the data halls from the kinetic energy stored in the flywheel. Each flywheel can provide the full load of 1.3 megawatts for up to 15 seconds or so while the 1.67MVA diesel generator starts and achieves full load. Typically generator support is achieved within 4.5 seconds of mains power being lost.
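For reference, a rough sketch of why that ride-through is comfortably sufficient, using only the quoted figures (these are typical values, not guarantees):

```python
# Bridging check: can the flywheel carry full load until the diesel takes over?
# Figures are those quoted on the tour; this is an illustrative sketch only.

FULL_LOAD_MW = 1.3          # full load each DRUPS must carry
RIDE_THROUGH_S = 15         # approximate flywheel support time at full load
TYPICAL_START_S = 4.5       # typical time for the diesel to assume the load

usable_energy_mj = FULL_LOAD_MW * RIDE_THROUGH_S   # ~19.5 MJ of usable kinetic energy
start_up_margin_s = RIDE_THROUGH_S - TYPICAL_START_S  # ~10.5 s of headroom
print(f"Usable energy ~{usable_energy_mj:.1f} MJ, start-up margin ~{start_up_margin_s:.1f} s")
```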
-
Each engine is a continuous-rated Perkins V12 diesel, coupled via a centrifugal clutch to the 1.67MVA Uniblock generator. They consume approximately 400 litres of diesel per hour when running at full load. In an emergency we have sufficient diesel stored onsite – 110,000 litres – to operate the data centre (including the cooling plant, the offices and all other systems) for no less than 24 hours. In that situation we would begin refuelling well before the 24 hours is up, in order to extend the site’s operation in generator mode if required. We have an SLA with our preferred contractor that mandates refuelling within four hours when requested.
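As a hedged back-of-envelope of fuel autonomy: the consumption and tank figures are those quoted above, while the number of engines running at full load is an assumption for illustration:

```python
# Fuel autonomy sketch. Consumption and tank size are the quoted figures;
# the number of engines running is an assumption for illustration.

TANK_L = 110_000            # diesel stored on site
BURN_L_PER_HR = 400         # per engine at full load
ENGINES_RUNNING = 3         # currently operational DRUPS units, assumed all at full load

runtime_hr = TANK_L / (BURN_L_PER_HR * ENGINES_RUNNING)
print(f"~{runtime_hr:.0f} hours before refuelling")  # ~92 hours, well beyond the 24-hour minimum
```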
Position 5 – Walk to north corridor
-
What’s really exciting about this system is Piller’s unique electrical distribution scheme known as Isolated-Parallel Bus, or ISO-P Bus for short. It was NEXTDC who introduced this system into Australia; our Melbourne facility was the first in the Asia-Pacific region to use it.
-
The ISO-Parallel Bus is located on the Ground Floor MEP and is the brains behind the DRUPS. It aligns all 13 DRUPS in a ring and constantly reviews the phase angle of each DRUPS. Should one suffer performance issues or begin to fail, it automatically adjusts the other DRUPS to increase their output, meaning that any spare capacity can be used anywhere in the data centre if required.
Position 6 – Carrier Rooms (end of the north corridor)
-
S1 has diverse building entries for our telecommunications carriers.
-
We maintain two Telco or carrier rooms at S1 where carriers establish their points of presence within the facility.
-
We have 29 carriers at M1 and expect that most of them will look to establish a POP at S1.
-
Current fibre carriers already in S1 include AAPT; Optus; Vocus; Telstra; Megaport; PIPE Networks; Nextgen Networks; and Uecomm.
Position 7 – Chill-Out Room (on way back to lift)
-
All customers and partners with IDACs have access to the chill-out room. This room has a fully functioning kitchen, a 75-inch flat-screen TV, a lounge, a gaming console, two fully reclining massage chairs, Foxtel and free Wi-Fi.
-
The chill-out room provides customers with an area to relax while waiting for software to install or when taking a break from a long session in the data centre.
-
There is also a vending machine that dispenses useful things for the data centre like cage nuts, cable ties and connecting cables, and another for drinks and snacks.
Position 8 – LOWER GROUND – Loading Docks, Storage and Staging rooms
Walk to the end and make your way back
-
(Walking towards the loading dock area on the left) A second security office is also located in the Lower Ground hallway. From this point our staff control access to the loading dock area, shipment receipt, storage and rubbish removal.
-
The design of this area is something NEXTDC has given a lot of thought to – this is the first point for any install and deliveries are often time critical. We recognise that ease of implementation is hugely important.
-
S1 has two loading docks enabling simultaneous deliveries of customer equipment. External door heights are 4.2m. You will see that we have installed a load leveller for larger trucks and deliveries and a scissor lift that can support anything from a UTE upwards.
-
Once a delivery is unloaded, it’s moved through this roller door to our secure area to unpack. The door then closes, allowing the next delivery to take place separately and securely.
-
S1 has dedicated storage space for customer deliveries that require short-term holding. Deliveries of unattended equipment, if pre-booked via our SMC, will be managed by Security Operations, who tag the delivery and move it to the storeroom. A record is made of the delivery and a ticket is raised in the ONEDC® portal to notify the customer that their delivery has arrived.
-
To the left is the waste room for all cardboard, plastic and rubbish. As both a fire-prevention and a dust-control measure, no cardboard or plastic is allowed past this point. Even though we provide for rubbish removal, we do encourage our customers to remove their packaging.
-
(Walking back towards the lifts on the right) We have three staging rooms designed for customers to prepare and test infrastructure prior to installing it within the data halls. Rooms are available via SMC booking on a first-come first-served basis.
-
Behind the staging rooms are the three internal water storage tanks (with provision for a fourth) that together can hold up to nearly 300,000L (in addition to our 100,000L rainwater tank that sits outside).
-
There are two 2.7m-high goods lifts each with a 3500kg capacity (3.5 tonne).
-
You will notice throughout the data centre that all door heights are maintained at 2.7m, allowing a 45RU rack on a pallet jack easy access from the loading dock to the data hall.
Position 10 – FIRST FLOOR – NEXTDC offices and customer suites
-
Point out NEXTDC office to the right.
-
Walk towards the rear and show that we have the capacity for shared customer hot-desking, private customer suites, purpose-built NOCs or SMCs, or temporary offices for the project management of larger installations.
Position 11 – The Gas Suppression Room
-
S1 has an inert gas fire-suppression system that utilises a ‘two-knock’ combination of very early smoke detection apparatus (VESDA) and the normal smoke/thermal fire detectors. Should both these systems raise an alarm, IG55 Proinert gas (50% argon, 50% nitrogen) will be released to suppress any fire within the data hall.
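For reference, the ‘two-knock’ release rule can be summarised in a short Python sketch; this is illustrative only and is not the actual fire-panel logic or its interface:

```python
# Illustrative 'two-knock' release rule: gas is only discharged into a hall
# when BOTH independent detection systems are in alarm for that hall.
# (Sketch only; the real fire panel, interlocks and manual overrides are
# considerably more involved.)

def release_ig55(vesda_alarm: bool, smoke_thermal_alarm: bool) -> bool:
    """Return True only when both detection systems agree there is a fire."""
    return vesda_alarm and smoke_thermal_alarm

assert release_ig55(True, False) is False   # a single 'knock' never triggers discharge
assert release_ig55(True, True) is True     # both alarms together release the gas
```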
-
There are 600 gas bottles here, and the room is configured for main and standby deployment.
-
That is to say, should there be a gas discharge we still have an active system while we replace the used bottles. The gas system is active in all data halls and the Interconnect Rooms.
-
Unlike the pre-action sprinklers in many data centres, which dispense water in the event of a fire, IG55 suppresses the fire quickly by reducing the level of available oxygen in the data hall, so all equipment around the affected area can continue to operate without incident. This is a leading system for its health, safety and environmental characteristics.
-
This gas-based fire suppression system is an important design advantage in S1, because unlike sprinklers that would cause permanent water damage to all critical IT equipment to potentially save only a single device, the gas system preserves and protects all electronics in the data hall. While it’s more expensive than sprinklers, it’s an important investment that substantially reduces the risk profile for our customers.
-
Here on the monitor we can also see a view from above of our mighty diesel generators.
-
Finally, in the corner is our water mist system that is used in case one of our diesels catches fire.
-
The system will spray approximately 500 litres of water in a mist into the diesel room and extinguish any burning fuel.
Position 12 – SECOND FLOOR – Corridor outside data hall (info screen)
Directly in front of lift
-
This display screen shows the floor plan of Level 2 of the facility in real time.
-
As you’re probably aware, S1 has four data halls in total. Data Halls 1 and 2 are on Level 2 with Data Halls 3 and 4 on Level 3.
-
The area we’re in right now is the main access corridor for Data Halls 1 and 2.
-
Each data hall provides 1,450m2, so at 2kW/m2 we have almost 3MW of available IT load per hall. Our data halls are surrounded by a 3.5m-wide service corridor. You can see the chilled water pipes through these glass tiles.
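As a quick check, the per-hall figure follows directly from the stated floor area and design density:

```python
# Per-hall IT load from the stated floor area and design density.
HALL_AREA_M2 = 1_450
DESIGN_DENSITY_KW_PER_M2 = 2

hall_it_load_mw = HALL_AREA_M2 * DESIGN_DENSITY_KW_PER_M2 / 1_000
print(f"{hall_it_load_mw:.1f} MW per hall")   # 2.9 MW, i.e. "almost 3MW"
```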
-
The core infrastructure that supports the data halls is housed on two levels: the electrical plant on the Ground Floor, and the mechanical plant (water towers, chillers, pumps and some water storage) within its own dedicated room on the roof, which will also support our solar array.
Position 13 – Service corridor to the left i.e. CRACs & PDUs
-
The service corridor provides for the associated data hall infrastructure such as precision computer room air-conditioning (CRAC) units that are fed by chilled-water pipes running underneath the raised floor of each service corridor, as well as the power distribution units (PDUs) that provide the power to the racks. The service corridors also act as an additional layer of security because they eliminate the need for personnel to be in the data hall when servicing and maintaining our critical infrastructure.
-
The service corridors house 22 CRACs per data hall; each CRAC provides a cooling capacity of 165kW for a total of approximately 3000kW per hall. The CRACs are in an N+4 configuration per data hall, ensuring we can maintain a consistent temperature of 22 degrees Celsius +/- 2 degrees during unit maintenance or failure as per our service-level agreement (SLA).
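For reference, a quick check of those cooling numbers; the quoted ~3,000kW per hall lines up with the N portion of the N+4 arrangement:

```python
# Cooling redundancy check: 22 CRACs per hall at 165 kW each, in N+4.
CRACS_PER_HALL = 22
KW_PER_CRAC = 165
REDUNDANT_UNITS = 4

installed_kw = CRACS_PER_HALL * KW_PER_CRAC                              # 3,630 kW installed
available_with_4_out = (CRACS_PER_HALL - REDUNDANT_UNITS) * KW_PER_CRAC  # 2,970 kW
print(installed_kw, available_with_4_out)
# Even with four units out for maintenance or failure, roughly 3,000 kW of
# cooling remains, in line with the hall's ~3 MW IT design load.
```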
-
Cold water running through the CRACs cools the warm air from the data halls as it passes through the unit. Once cooled, the air is returned to the hall via the under-floor plenum before entering our pods through air vents in the floor.
-
Also in the service corridor are 24 power distribution units each with 144 x 32Amp circuit breakers to service customer racks. Each circuit is individually monitored, allowing us to measure the power being consumed in each rack and to report this to the customer via our ONEDC® portal, which we will demonstrate to you once we’re in the data hall next door.
Position 14 – Data Hall 1
-
All of the flooring in the data halls is raised at a height of one metre. This provides space for the power cabling. The communications cabling sits in the various cable baskets overhead – note the yellow fibre-duct above the racks.
-
The communications infrastructure that NEXTDC provides is either single-mode optical fibre (SMOF) or Cat 6 Ethernet cable (note the zone boxes for the fibre around the data hall and the Ethernet patch racks in the centre of the data hall).
-
All racks are supplied with dual 32Amp power feeds, delivered to each rack as an A and B feed. The A feed is orange and the B feed is blue. All PDUs, cable trays and power outlets under the floor are colour coded to eliminate any confusion (see glass tile).
-
To ensure efficiencies and improve our power usage effectiveness (PUE) – our target is 1.3 – we utilise ‘cold-aisle containment’. Cold air is blown under the raised floor and directed up through the vents in the front of the racks into the cold aisle, allowing us to cool up to 6kW in a standard rack, and more than that if required and technically approved. The equipment draws in the cold air through the rack door, heating it up in the process. The air exits the back of the rack, warming the rest of the room. This warm air is drawn through the wall vents back into the CRACs in the service corridor, where the cooling process begins again.
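To make the PUE target concrete, a small illustrative calculation (the IT load figure below is an assumption chosen purely to show the ratio):

```python
# PUE = total facility power / IT equipment power.
# The 1.3 target is the quoted figure; the IT load below is illustrative only.

it_load_kw = 1_000                      # assumed IT load for illustration
pue_target = 1.3

total_facility_kw = it_load_kw * pue_target
overhead_kw = total_facility_kw - it_load_kw
print(f"At PUE {pue_target}, {it_load_kw} kW of IT draws {total_facility_kw:.0f} kW "
      f"in total ({overhead_kw:.0f} kW of cooling and other overhead).")
```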
-
As mentioned, S1 has an inert gas fire-suppression system that utilises IG55 Proinert gas (50% argon, 50% nitrogen) to suppress any fire within the data hall.
-
Each data hall is a separate fire-rated compartment in itself, so if fire suppression is required in one hall, it does not trigger in or affect the other halls.
Position 15 – The Racks
-
Our standard racks at S1 are sourced from Server Racks Australia. You can see that the front of the rack is blue and the back door is red. This indicates the cold aisle (blue) and hot aisle (red).
-
All racks are 1200mm deep and 45RU high, and the standard rack is 600mm wide.
-
NEXTDC can cater for custom racks such as large SANs or mid-range infrastructure that comes with a customised proprietary rack (for example, IBM p595 or HP XP24000). As long as the rack is no more than 1200mm deep NEXTDC can integrate it within our standard containment system. We call this ‘bring your own rack’ or BYOR.
-
Now we’d like to give you a demonstration of our proprietary customer portal – ONEDC®
-
Go to iPad and screen set up.
-
ONEDC® is accessible from any internet-enabled device and gives you a remote view of your data-centre space as you can see here (show ONEDC® on iPad or iPhone).
-
ONEDC® allows you to connect with your set-up and check on your power consumption, temperature and security access logs for each rack. You can also send us support requests via the ticketing system and check back in to see the current progress whenever you like. You can even book deliveries and request access for technicians remotely, or unlock a rack or apply for new services. It’s all done at the touch of a button in ONEDC®.
-
NEXTDC-provided racks are individually secured by our customised TZ rack-locking system, which can be managed and monitored by customers through the ONEDC® customer portal. The benefit of this locking system is that the one card allows customers to access both the facility and their racks, and to maintain a log of precisely who accessed each rack and when.
-
In this example, we have two racks, as you can see. Using ONEDC® you can open any of your own racks, no matter where you are, and you can see the status change. (Demo).
-
Power consumption data is available through ONEDC® in real time, which means you can see immediately if there is something unusual that may indicate a fault with one of your systems.
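As an illustration of how a customer might use that real-time feed, here is a hypothetical Python sketch; the function, data format and threshold are invented for the example and are not the actual ONEDC® interface:

```python
# Hypothetical example of using per-rack power readings to flag anomalies.
# The data structure and field names are invented for illustration; ONEDC(R)
# exposes this information through its own portal and interfaces.

from statistics import mean

def flag_unusual_racks(readings_kw: dict[str, list[float]], tolerance: float = 0.25) -> list[str]:
    """Flag racks whose latest draw deviates more than 25% from their recent average."""
    flagged = []
    for rack, history in readings_kw.items():
        if len(history) < 2:
            continue
        baseline = mean(history[:-1])
        latest = history[-1]
        if baseline > 0 and abs(latest - baseline) / baseline > tolerance:
            flagged.append(rack)
    return flagged

# Example: rack "A-07" drops sharply, which may indicate a failed power supply.
print(flag_unusual_racks({"A-07": [3.1, 3.0, 3.2, 1.4], "A-08": [2.0, 2.1, 2.0, 2.1]}))
```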
-
Racks can be purchased individually or in a block. Blocks are 10 or more contiguous racks – the benefit of a block is that you can share the total power purchased across the racks, with a maximum of 6kW in a single rack. Note that power cannot be physically shared between racks, as that is unsafe and against our Facility Rules.
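A minimal sketch of that block-power rule as just described (rack names and figures are illustrative only):

```python
# Block power rule: the purchased total can be spread across the racks in a
# block, but no single rack may draw more than 6 kW. Figures are illustrative.

MAX_KW_PER_RACK = 6.0

def allocation_ok(purchased_kw: float, allocation_kw: dict[str, float]) -> bool:
    """Check a proposed per-rack split against the block total and the per-rack cap."""
    within_total = sum(allocation_kw.values()) <= purchased_kw
    within_rack_cap = all(kw <= MAX_KW_PER_RACK for kw in allocation_kw.values())
    return within_total and within_rack_cap

# 10-rack block with 40 kW purchased: an uneven spread is fine while both limits hold.
block = {f"rack-{i:02d}": kw for i, kw in enumerate([6, 6, 5, 5, 4, 4, 3, 3, 2, 2], start=1)}
print(allocation_ok(40.0, block))   # True: totals 40 kW, no rack above 6 kW
```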
-
S1 has secured a massive 11.5MW of IT load capacity to support high-density infrastructure into the future. Once customers apply for high-density racks, our engineering team needs to review the request and approve the proposed design configuration before the additional power is made available.
-
The IT ecosystem in S1 is made up of carriers, ISPs, ASPs, MSPs, IaaS and SaaS providers and more. The ecosystem provides you with many choices of provider within the facility, just by ordering one cross-connect. If at any stage you decide to change providers, it is as simple as having that cross-connect changed, which NEXTDC would manage securely within our Interconnect Room.
-
S1 has multiple cable paths from the street to customer racks. There are dual pits in Eden Park Drive and at the rear of the building where cables run directly into the interconnect rooms located on both the east and west sides of the ground floor.
-
There are multiple paths from each interconnect room to the data halls and multiple data hall entry points.
Some features of S1 you didn’t see:
-
The mechanical plant (water towers, chillers, pumps and some water storage) within its own dedicated room on the roof.
-
Also on the roof is a space provisioned for the new 300kW solar array. The 400kW array on the roof of our M1 Melbourne facility is Australia’s largest privately owned solar system and makes M1 the first data centre in the Asia-Pacific to utilise its own solar power.
Uptime Institute Tier III Certification
-
S1 has formally received the Uptime Institute’s Tier III independent certification for both design and construction. This is an outstanding addition to our technical credentials and S1 is only the seventh data centre in Australia to achieve this coveted standard.
-
Tier III certification of a data centre focuses on the facility’s capability to provide concurrent maintainability. This refers to the ability for individual plant or parts of the critical infrastructure to be shut down for maintenance or replacement without interruption to services. Tier III certification provides the Uptime Institute’s endorsement that the facility’s design will support extremely high levels of service availability.