PT. Sinergy Mediatek Informasi

data centre technologies 

How do you build a data centre for the future?

A data centre is a facility for corporations or government bodies that use information technology at large scale and require 24x7 standard operations. Data centres are critical infrastructure, and both their construction and operation must follow recognised standards.

Our company has extensive experience designing, building, and maintaining data centres throughout Indonesia.

Green Data Centre

Our company is always concerned with efficiency and with technology that makes the Earth better; that is why we care deeply about the "Green Data Centre".

The development of the Data Center currently follows the concept of a Green Data Center or Eco Data Center, where the Data Center uses the least amount of Energy with the least possible impact on the environment.

Current technology developments, such as servers built on the Hyperconverged Infrastructure (HCI) concept, use power very economically and automatically reduce the need for cooling.

In addition to cooling, the provision of electrical energy has also changed under the "Green Data Centre" concept. Instead of sizing electrical capacity against an estimate several years out, the "Modularity" concept provisions for today's calculated needs and scales capacity as the data centre develops.

Data Centre Contractor

With 20 years of experience building data centres in Indonesia, we have many highly skilled human resources covering: civil works, mechanical and electrical, fire suppression, Uninterruptible Power Supply (UPS), precision cooling (CRAC units), security systems, and supporting functions for the data centre.

We can deliver Tier 1 through Tier 4 data centres, compliant with the Uptime Institute and TIA standards for data centre certification.


Insights on Building a Data Centre

  • The design must allow for people to be successful within the environment. You cannot create an unreliable infrastructure and place the burden of expectations of reliability on your IT staff to manage it.
  • The design should be as safe and straightforward as possible.
  • The design should be generally fault-tolerant.
  • The design should be able to scale reasonably.

This guide provides a single reference architecture that we recommend for corporate data centres. The recommendations come from many years of experience designing and operating data centres as electrical engineers. For most readers, we suggest handing this reference architecture to a consulting engineer, so that the design respects code and life safety and you end up with a solution that works well.

Cost of Building a Data Centre

There are many different features to consider when it comes to choosing your data center options, and deciding on the best data center for your company’s specific needs. Take a look at some of the key factors to bear in mind. 

Your Physical Footprint
Think about the physical footprint you’ll need. Once you know this, you’ll be able to get a much clearer idea of the costs involved in your colocation data center. If you’re unsure of your power requirements, this is a good place to begin. Look at your current racks, and think about what’s coming. High density loads such as blade chassis and other devices may put you into the “Heavily Loaded Cabinet” or “Moderately Loaded Cabinet” brackets, which will affect cost. 

Your Network Requirements 
There is a considerable degree of variation in the costs of different network providers. Costs depend on efficiency and individual business requirements. When you look into the costs of network providers, remember that economies of scale can have a dramatic effect on pricing. Heavy network users will therefore usually be able to access more cost effective options. For the majority of businesses, a few hundred Mbps is sufficient. This is available at around the $250/mo price point, with the option to expand as a company grows.

Service Levels
 The level of service that you opt for will inevitably affect pricing, so this is another thing you’ll want to bear in mind. Many data centers offer a completely hands-off approach. While this will reduce costs, it does mean a heavier workload for your own team. If you’d like the support of service staff, with a fully managed service and options such as remote hands available, you will likely be looking at a slightly higher price point. However, the experience offered will be far superior, and your team will have more time to focus on other tasks. 

Colocation Cost Components
 Some data centers come with hidden costs, so keep an eye out for these. Typically these costs might include cross connect fees, additional packages such as remote hands and extra network costs that you may not have considered. When added together, these can really bump up the price of the data center. For this reason, fully managed data centers can often prove more cost-effective in the long run.

Steps, Considerations & Specifications for Building a Data Center

1. Electrical layout and general descriptions

We prefer dedicated and redundant FR3 fluid-filled transformers for primary utility entrances.

Where separate transformers are not possible, the design may accept two different feeds from the same utility building source as the primary utility supply. Regardless of where those feeds originate, the central entry point to the data centre shall be the switchgear for each leg of the 2N infrastructure. An inbound TVSS (Transient Voltage Surge Suppression) device must be on the line side of the ATS for both the utility and the generator.

The Uptime Institute does not consider utility diversity for tier ratings, nor even the presence of a utility. We prefer to focus on building emergency backup generation redundancy at a point of demarcation that we control (the entrance to the DC and the switchgear), rather than worrying about the wider utility system over which corporate data centres often have no control. The vital point is that, regardless of where we feed the 2N system from, we have two entirely separate switchgear lineups with two different backup power generation systems (more on that soon).
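The value of two fully independent legs can be illustrated with a back-of-the-envelope availability estimate. This is only a sketch: the 99.9% per-leg figure below is an assumed value for illustration, not a measured one, and it assumes the legs fail independently, which is exactly what the isolation described here is meant to guarantee.

```python
def availability_2n(per_leg: float) -> float:
    """Probability that at least one of two independent power legs is up.

    Assumes the two legs (switchgear + generator + UPS) fail
    independently -- the point of physical and electrical isolation.
    """
    return 1.0 - (1.0 - per_leg) ** 2

# An assumed 99.9%-available single leg ("three nines")...
single = 0.999
# ...becomes roughly six nines when duplicated as 2N:
print(f"{availability_2n(single):.6f}")   # 0.999999
```

Any shared component (a common bus, a shared generator) breaks the independence assumption and pulls the combined figure back toward the single-leg number.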

The switchgear is preferably a two-breaker pair, but the design may accept an ATS with a transition switch.



The transient voltage surge suppression system should have an indicator light to show the health of the device. Each leg of the 2N infrastructure must have its own independent lineup of switchgear, distribution, and UPS.

Sometimes we find data centres with generators backing up the entire building; this may count as ONE generator source, assuming it is adequately designed. A UPS plus distribution should feed below it to complete one leg of the 2N infrastructure. To complete the design, an entirely separate switchgear, generator, and UPS lineup should provide a second, truly diverse backed-up cord to each rack. That second system may accept the building source as its "utility source." Still, we cannot count the same generator system twice, so we must provide a dedicated backup generator for the redundant feed.

In a small data centre, the redundant switchgear and all critical systems may share the room with the data centre itself. Otherwise, use separate, physically isolated rooms. In either case, paths to and from the gear, including routes out to backup generation, shall be physically diverse and isolated and shall not run down any single corridor. The physical separation between redundant feeds should include, at a minimum, a 1-hour-rated firewall.

In general, we prefer to look for physical, electrical, and logical isolation between redundant components; no paralleling, no N+1 common bus, no main-tie-main, etc.
 

Based on technology, UPS systems are grouped into two types: Modular UPS and Non-Modular UPS.

A Non-Modular UPS is the long-established design in which the power module and batteries are integrated into the UPS cabinet. With a Non-Modular UPS, you must estimate the data centre's growth for the next 5-10 years, and that full capacity must be available from the start of construction.

A Modular UPS is the newer technology: its power and battery modules can be removed easily while the UPS is running or in maintenance mode, and if additional power is needed, capacity can be added simply by inserting modules, as long as slots remain in the UPS rack.

With a Modular UPS, we can reduce the Total Cost of Ownership and keep customers satisfied over long-term use.
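The pay-as-you-grow argument can be made concrete with a small sizing sketch. The 50 kW module rating and the load figures below are illustrative assumptions; real module sizes vary by vendor.

```python
import math

def modules_needed(load_kw: float, module_kw: float, spare: int = 1) -> int:
    """Hot-swappable power modules required for a load, plus N+spare redundancy."""
    return math.ceil(load_kw / module_kw) + spare

# Day one: 120 kW of IT load on assumed 50 kW modules, N+1
print(modules_needed(120, 50))   # 4  (3 base + 1 spare)
# Later growth to 220 kW just adds modules to free slots, not a new frame
print(modules_needed(220, 50))   # 6  (5 base + 1 spare)
```

A non-modular design would have to install the full end-state capacity on day one, which is exactly the up-front cost the modular approach avoids.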




UPS Maintenance

Regular UPS maintenance is essential: it ensures the UPS is running properly and will function normally during a power outage or when switching from mains to UPS or from generator to UPS. Maintenance is carried out periodically to check the condition of the UPS, especially battery charging, backup output, and other UPS components. SMI has an experienced Services Division that has performed maintenance on UPS systems of many brands, from small to large scale, and during maintenance we always safeguard the continuity of the running data centre by providing backup power when needed.

Sample UPS

UPS on site at a data centre

Single-frame (Non-Modular) UPS

Sample of a Non-Modular UPS

Modular UPS

Sample of a Modular UPS

Computer Room Air Conditioning (CRAC)

Cooling is critical for data centres: good cooling control keeps the equipment installed in the data centre running well.
Every piece of equipment in the data centre generates heat, and if that heat is not controlled properly it will cause serious damage and can even bring the data centre down.
The choice of CRAC unit is determined by several things:

  1. The tier level of the data centre: Tier 1, Tier 2, Tier 3, or Tier 4.
  2. The cooling concept that will be used.

Concept with Raised Floor 

Raised-floor designs have been installed in many data centres, and a large number of customers are still building raised-floor computer rooms. A raised floor allows flexible cooling arrangements but has limited cooling capacity, and it is relatively costly to install and maintain. One important handicap of raised-floor cooling is floor loading restrictions.

A raised-floor design is based on the principle of an under-floor cold-air distribution path, with the hot air flowing back to the air-conditioning unit either via the room or via a dedicated duct or a suspended-ceiling void.

When setting up a data centre with a raised floor, choose the correct layout according to the goals of the data centre, as follows:


  1. Classroom setup: the traditional computer-room layout. It leads to inefficiency through mixing of hot and cold air and is not recommended for data centres today.
  2. Hot- and cold-aisle setup.
  3. Hot- and cold-aisle setup with a suspended ceiling.
  4. Placement of equipment in the racks.
  5. Avoid water leakage and short circuits.
  6. Temperature and air volume in cubic feet per minute (CFM) or cubic metres per hour (CMH): the airflow provided by the raised floor should match the CFM/CMH requirements of the racks.
  7. Perforated tile and equipment placement.
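The CFM/CMH matching above can be estimated from the heat load with the sensible-heat relation. This is a minimal sketch assuming standard air properties (density ~1.2 kg/m^3, cp = 1.005 kJ/kg.K) and an assumed 10 C air-side temperature rise; real designs should use site conditions.

```python
def required_airflow(heat_kw: float, delta_t_c: float = 10.0):
    """Approximate airflow to remove a given heat load.

    Uses Q = m_dot * cp * dT with air at ~1.2 kg/m^3 and
    cp = 1.005 kJ/(kg*K). delta_t_c is the supply-to-return
    temperature rise across the equipment.
    """
    cmh = heat_kw * 3600.0 / (1.2 * 1.005 * delta_t_c)  # cubic metres per hour
    cfm = cmh / 1.699                                   # 1 CFM ~ 1.699 m^3/h
    return round(cmh), round(cfm)

# A 5 kW rack with a 10 C air-side delta-T:
cmh, cfm = required_airflow(5.0)
print(cmh, cfm)   # roughly 1490 CMH / 880 CFM
```

A larger delta-T halves the required airflow for the same load, which is why containment (covered later) makes cooling more efficient.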

CRAC Concept Without Raised Floor 

The non-raised-floor principle typically only works well where the ICT equipment uses front-to-rear cooling airflow; racks are placed directly on the slab.

A non-raised floor gives the customer several benefits: no cost for raised-floor structures, no cleaning required under the floor, and all cabling runs overhead. When using a non-raised floor, the slab must be treated with proper paint to avoid contamination.

  1. The cooling options without a raised floor are "in-row" and "overhead duct".
  2. In-row cooling can be deployed with a non-raised-floor setup; placing cooling close to the heat load gives good airflow efficiency, at the cost of fewer racks per square metre/foot inside the computer room.
  3. Overhead ducted cooling dumps cold air directly in front of the racks and extracts the hot air from the back.
  4. Ducts often have louvres/vents to regulate CFM/CMH.
  5. Ducts must be well designed to ensure enough air volume can be delivered and extracted at the right locations.
  6. Air-conditioner redundancy must be taken into account.
  7. Do not paint the ducts, as paint may splinter off over time and the particles will contaminate the room.
  8. Inspect and clean the ducts on a regular basis.

Sample of a CRAC unit

Fire Protection and Safety 

Most fires in data centres originate from electrical sources such as equipment (overheating, zinc whiskers, or dead shorts), electrical distribution (wiring, loose connections, sparks), and light fixtures.

Other contributing factors are bad connections, overloading, and dust. Various data centre audits have shown that a high percentage of data centres have (potential) issues with their fire protection.

Fire Suppression Requirements

  1. Detect as early as possible.
  2. Safe for humans (as much as possible).
  3. Environmentally friendly.
  4. Effective for fires in the data centre and its supporting facilities.
  5. Cause no damage, or minimal damage, to sensitive equipment.
  6. Comply with national and building codes.


Standards for FIRE SUPPRESSION IN DATA CENTRES

  1. NFPA 75
  2. NFPA 2001 / ISO 14520
  3. Local codes
  4. The standards typically describe:
    1. Safety measures
    2. Gas/flooding design and allowable exposure levels
    3. Cardiotoxicity and allowable exposure levels
      1. No Observed Adverse Effect Level (NOAEL): the highest concentration of agent at which no marked or adverse effect occurred.
      2. Lowest Observed Adverse Effect Level (LOAEL): the lowest concentration at which an adverse effect was measured.

Detection System 

Data centres use detection systems such as:

  1. VESDA (Very Early Smoke Detection Apparatus) or HSSD (Highly Sensitive Smoke Detection)
    1. Works via air sampling.
    2. As much as 1,000 times more sensitive than standard smoke detectors.
    3. Care must be taken, especially during building works.
  2. Smoke detectors for fire panels
    1. Ionization detectors (use a low, harmless level of radiation).
    2. Photoelectric detectors.


FIRE SUPPRESSION SYSTEM BEST PRACTICE

  1. Install a VESDA/HSSD type of system.
  2. Use one of the gas-based systems as the primary fire suppression system.
  3. Use pre-action sprinklers as the secondary system.
  4. Ensure that the room is properly sealed.
  5. Ensure that the gas quantity is enough to achieve the required concentration levels.
  6. Create extraction vents.
  7. Maintain the system properly.
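For the gas-quantity point, NFPA 2001 expresses the total-flooding quantity as W = (V / S) x (C / (100 - C)). A sketch with illustrative numbers follows; the specific vapour volume S depends on the agent and temperature, and the 0.137 m^3/kg value below is an assumption for illustration only. Always take S, the design concentration, and safety factors from the agent's design manual.

```python
def agent_mass_kg(volume_m3: float, conc_pct: float, s_m3_per_kg: float) -> float:
    """NFPA 2001 total-flooding quantity: W = (V / S) * (C / (100 - C)).

    volume_m3   -- net protected room volume
    conc_pct    -- design concentration in percent
    s_m3_per_kg -- specific vapour volume of the agent at room temperature
    """
    return (volume_m3 / s_m3_per_kg) * (conc_pct / (100.0 - conc_pct))

# Illustrative: 200 m^3 room at a 7.9% design concentration,
# with an assumed S of 0.137 m^3/kg -> roughly 125 kg of agent
print(round(agent_mass_kg(200.0, 7.9, 0.137), 1))
```

Because the room volume enters directly, sealing the room (point 4 above) is what keeps the achieved concentration at the designed level.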

Gas for Fire Suppression  

Fire suppression in data centres can use many types of gas, and we must choose carefully, because some gases are not allowed in many countries and are not safe for humans. In Indonesia, many data centres use FM-200, Novec, Inergen, and Argonite.



Halon 1301

Production ceased after 1 January 1994, and across Europe all Halon-based systems had to be decommissioned before 31 December 2003, because Halon damages the ozone layer.
Banned in most countries.


CO2

CO2 is one of the lowest-priced clean agents and is very effective for fire suppression, but it is lethal at total flooding concentrations (34%) and causes severe health problems even at lower concentrations.
Installation and maintenance are cost-effective.

Not allowed in occupied areas in most countries.


FM 200

Widely used in many data centres around the world.
Not harmful to humans in itself.
The gas is not clear during discharge.
The gas container should be reasonably close to the data centre.
Leaves no residue.
Some countries, such as Denmark and Iceland, have already banned or restricted the use of FM-200.


Scalable Network Infrastructure 

Network cabling is the foundation that supports a high-availability data centre, its IT equipment, and its applications. Proven products and contractors are crucial for the proper design, installation, and maintenance of a cabling infrastructure. Good cabling reduces downtime and improves operational efficiency, manageability, reliability, and availability. A well-structured cabling system brings: reduced risk of downtime, easy re-patching, easy fault finding, better cooling, and standardised lengths.

A data centre with poor cabling is very hard to manage, so make sure your data centre has a good structured-cabling design.

Cooling becomes a big problem when data cables block the cooling airflow, so design and plan the cabling for 5-10 years into the future.

Standardisation

Structured cabling follows the TIA/EIA-568 standard for copper wire and fibre-optic cable. Copper-wire characteristics for a data centre are as follows: unshielded or shielded; solid cables with a maximum length of 90 m; flexible/stranded cables (patch cords) for patch panels and short distances, with a maximum length of less than 10 m; and a total channel length of no more than 100 m including both solid and flexible cable.
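The length rules above can be checked mechanically. A small sketch of the channel-length limits quoted here (90 m solid, under 10 m of patch cord, 100 m total):

```python
def channel_ok(solid_m: float, patch_m: float) -> bool:
    """Validate a copper channel against the TIA/EIA-568 limits quoted above:
    solid horizontal run <= 90 m, stranded patch cords under 10 m in total,
    and the complete channel no longer than 100 m."""
    return solid_m <= 90.0 and patch_m < 10.0 and solid_m + patch_m <= 100.0

print(channel_ok(85.0, 8.0))   # True: 93 m channel, within every limit
print(channel_ok(92.0, 5.0))   # False: the solid run exceeds 90 m
```

Checks like this belong at design time; after installation, a cable tester (as described below) verifies each terminated channel.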

Copper Cable

Copper cable is commonly Category 6 or higher, shielded or unshielded, for a better connection. Copper terminations use structured-cabling patch panels with flat or angled panels. Patch cables should always be the same category as, or a class higher than, the structured cable, and should match it as shielded or unshielded twisted pair. Patch cables must be pre-fabricated, with no on-site termination. Every node termination must be tested with a proven cable tester.

FIBER OPTIC

Light is emitted as pulses at the source (laser/LED) and travels through the fibre by bouncing off the cladding, allowing it to travel theoretically "endlessly"; the receiver takes the light pulses and converts them back to data. Fibre has the following characteristics: longer distances than copper, not prone to EMF, and lighter and smaller than copper. Fibre comes in various sizes and specifications, such as 62.5/125 um and 50/125 um (multi-mode) and 8.3/125 um (single mode).

Fibre termination on a patch panel uses fibre patch cords: SC (now replaced by LC or mini-LC), with MPO now used for 40 Gbps and higher-speed links. Every fibre connection must be handled with care, and connectors must always be cleaned before termination.


For certification and best performance, every data centre's cable installation should follow the TIA-942 logical network architecture.

PT. SMI is a leading structured-cabling contractor; we work with many world-famous brands such as CommScope, NetConnect, Panduit, Belden, LS Cable, Nexans, Draka, and many more.

We have many project references, from small to large structured-cabling projects.


Containment Data Center

Hot- and cold-air containment systems are designed to maximize cooling predictability, capacity, and efficiency at the rack, row, or room level.

EcoAisle is an intelligent thermal containment solution designed to increase cooling-system efficiency while protecting critical IT equipment and personnel. The Active Flow Control (AFC) available in EcoAisle communicates with the cooling system to right-size the cooling airflow to the IT load, reducing fan energy compared with conventional cooling systems.

The EcoAisle system adapts to varying rack heights, aisle widths, and rack depths to support either hot- or cold-aisle containment. Its Air Return System provides a centralized hot-air return path for room-based CRAC/CRAH units or external air-handling systems. The optional UL 723S-listed Fire Safe System uses temperature sensors to drop the ceiling panels in the event of a fire, and it can also be used with a field-supplied smoke detector to activate when smoke is detected within the aisle.

EcoAisle provides a safe and efficient environment for IT equipment and personnel, incorporating additional features such as high-efficiency LED lighting with motion detection. It can be deployed in zones or modules to gain the benefit at either the rack or the row level.

The data center is fraught with power and cooling challenges. For every 50 kW of power the data center feeds to an aisle, the same facilities typically apply 100-150 kW of cooling to maintain desirable equipment inlet temperatures. Most legacy data centers waste more than 60% of that cooling energy in the form of bypass air.

Data centers need more effective airflow management solutions as equipment power densities increase in the racks. Five years ago the average rack power density was one to two kW per rack. Today, the average power density is four to eight kW per rack, and some data centers that run high-density applications are averaging 10 to 20 kW per rack.

The cost of electricity is rising in line with increasing densities: it is about US$0.12/kWh for large users, and forecasts point to a greater than 15-percent rise in cost per year over the next five years.


Containment makes existing cooling and power infrastructure more effective. Using containment, the data center makes increasingly efficient use of the same or less cooling, reducing the cooling portion of the total energy bill. Data centers can even power down some CRAC units, saving utility and maintenance costs. 

Containment makes running racks at high densities more affordable so that data centers can add new IT equipment such as blade servers. Data center containment brings the power consumption to cooling ratio down to a nearly 1 to 1 match in kW consumed. It can save a data center approximately 30 percent of its annual utility bill (lower OpEx) without additional CapEx.

Containment Benefits
Vendors design containment solutions for fast, easy deployment and scalability for data center growth. Data center containment enables the creation of a high-capacity data center in a very short period of time (hours).

Containment enables IT professionals to build out infrastructure, data processing, and cooling loads in small, controlled building blocks as demand grows. This is more affordable than building the data center infrastructure to handle the maximum cooling and data processing load from day one, which is the traditional method. Containment increases its cost effectiveness as rack densities increase.

Data centers typically have more cooling capacity than the load requires. Still, this capacity does not cool equipment adequately. By raising the delta T, containment avoids the capital expense of adding more mechanical cooling. As you operate cooling under higher return temperatures, cooling becomes more efficient.

The smaller the percentage of total energy the data center uses to feed cooling, the greater the percentage of total energy it uses to feed IT equipment. This results in a lower PUE, which should be closer to a 1:1 ratio.
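PUE is simply total facility power divided by IT power, so the effect of containment can be read straight off the meter readings. The kW figures below are illustrative, not measurements:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power.
    1.0 is the theoretical ideal, where every watt reaches the IT load."""
    return total_facility_kw / it_load_kw

# Legacy room: 50 kW of IT plus 75 kW of cooling and other overhead
print(pue(125.0, 50.0))   # 2.5
# After containment trims bypass air, the same IT load needs less overhead
print(pue(80.0, 50.0))    # 1.6
```

The IT load is unchanged in both cases; only the cooling overhead shrinks, which is exactly the lever containment pulls.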

Standardization is an operational benefit of containment. Vendors engineer containment into building blocks so that as the data center grows, the enterprise simply adds more uniform pods. Containment reliability and integrity derive from design redundancy that mitigates the downside risk of cooling system failures.

Containment aligns with the enterprise by offering a low TCO including low and progressive acquisition costs, quick time to deploy, and lower operational and maintenance costs. Maintenance costs grow only as the data center adds containment pods.


Raised Floor 

What is a raised access floor?

The raised access floor, also called a "floating floor" or "false floor", is a system created to meet the technological needs of technical rooms. It allows easy access to and maintenance of the cabling infrastructure in a data center, and it is good for cooling airflow.

 A raised floor in a data center is an elevated floor that is built two inches to four feet above a concrete floor. It creates a space that can be used for cooling, electrical, and mechanical services. In data centers, raised floors are also used as a way of distributing cold air. By using a raised floor, facilities not only reduce the amount of air needed to cool equipment, they also require less energy and improve temperature distribution across all of the cabinets. According to research on the impact of raised floors on thermal behavior in commercial buildings, the presence of a raised floor can potentially reduce the cooling load by as much as 40 percent. Combining this system with an AI cooling solution could deliver even greater savings.

Keeping Cool

Servers in data centers generate a huge amount of heat, presenting a major problem for data center designers and managers alike. When servers overheat, a common reaction is to consider getting extra cooling capacity, which is based on the assumption that the existing cooling infrastructure isn’t capable of maintaining a proper temperature. In reality, the problem may not be the result of insufficient capacity, but rather poor airflow management.

In order to keep the data center cool, a common practice is to install perforated raised floor tiles within cold aisles. These perforated tiles typically are not installed in hot aisles, unless there is a maintenance tile in place. These maintenance tiles give employees access in a warmer environment, so they can work in comfort. However, maintenance tiles should not remain in place permanently as they restrict air flow.

Sometimes grates are used as a quick fix for hot spots in a data center. However, since a grate can allow up to three times more air than the perforated raised floor tile, using them will exacerbate the issue. Managing the placement of raised floor tiles is critical. If not enough tiles are installed, the air can begin to recirculate. If too many tiles are installed, it can allow air bypass. If a choice must be made between recirculation and bypass, then bypass is preferable.
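Tile placement can start from a simple count of how many perforated tiles the cold aisle needs. The 500 CFM per tile figure below is an assumption for a typical 25%-open tile at typical underfloor pressure; the real delivery depends on the plenum pressure and should be measured on site.

```python
import math

def tiles_needed(total_cfm: float, cfm_per_tile: float = 500.0) -> int:
    """Perforated tiles required to deliver the cold-aisle airflow."""
    return math.ceil(total_cfm / cfm_per_tile)

# Ten racks needing roughly 880 CFM each:
print(tiles_needed(10 * 880))   # 18
```

Installing noticeably fewer tiles than this invites recirculation; installing many more invites bypass, the trade-off described above.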

Cabling and Additional Equipment

Having a raised floor in a data center also makes it easier to do equipment upgrades or install completely new equipment. This can include the installation of cabling and redeveloping the premises for other purposes. A raised floor is a good design strategy when there is a large amount of data center cabling to run. This is more efficient and can cost less than systems that are mounted near the ceiling. It can also help with the number of hidden cables and consolidation of physical ports and power plugs.

Running data center cabling under the raised floor tiles also helps to keep the data floor uncluttered and neat. Without overhead wiring systems in place, there’s nothing to block light fixtures and data center technicians don’t need a ladder to access cabling. Making a change to data center cabling is a simple matter of identifying the correct floor panel and removing it rather than accessing overhead trays that are located close to servers, light fixtures, and sprinkler systems.

Flexible Design

When setting up an initial design for a raised floor, data center engineers should consider the facility’s future development needs. This makes it easier to factor in the amount of free space needed to install both current and future equipment. The space beneath the raised floor tiles should be designed to allow cool air to circulate efficiently. Once a floor is installed, it’s critical for data center personnel to perform regular maintenance on the area, which includes taking special care to make sure it stays clean.

Since cold air can be channeled under the floor, a data center with a raised floor offers more versatility in terms of equipment deployment than a slab-based design. Rather than bolting the cabinets to the slab and directing cooling from above, raised floor tiles are more modular, allowing the facility to relocate equipment without the need to install new cooling infrastructure overhead.

Raised Floor Tile Maintenance

Cleaning underneath raised floor tiles helps keep out pollutants that could potentially pose a hazard to operations. Dust can get underneath the raised floor tiles and flow into equipment. The good news is that most data centers adhere to a regular policy of cleaning underneath the raised floors. This ensures the space created beneath the raised floor tiles is clean and free of contaminants, reducing the amount of dirty air getting pushed into the servers, which can increase the risk of equipment failure.

Cabling layout is very important in a facility with a raised floor. Just because the cabling will be out of sight doesn’t mean it can be out of mind as well. If too many cables are piled up in any area, they could significantly restrict or even block airflow, preventing some equipment from getting the cooling resources it needs. Data center managers need to carefully monitor how cables are arranged, especially when new lengths are being laid down or existing cabling needs to be replaced.

Raised floors may be one of the oldest design standards found in data centers, but they remain a popular strategy for managing cooling needs and cable deployment. By maximizing the potential of raised floors, data center managers can ensure that their facilities will remain efficient and effective for many years to come.