Tuesday, September 10, 2013

Technology: Infrastructure that Creates the Foundation for Growth and Agility

- Peter A. Panfil, Vice President, Global Power at Emerson Network Power, says:

It is an ongoing undertaking; data centers must become more efficient, reliable and agile to support future business growth. Yet the problems of the past—low utilization, lack of visibility and inefficient equipment and processes—are holding many organizations back from achieving the required transformation.

The data center infrastructure (the power and cooling systems that ensure safe and continuous operation) often represents the least agile and scalable component of the data center. If improperly designed and maintained, it can constrain growth and contribute to downtime. Conversely, the right infrastructure creates a foundation for continuous availability, increased return on capital and cost-effective growth.

Approaches for Scalability
Heat density has been a top concern identified by the Data Center Users Group (DCUG) in eight of the last nine years. Yet, according to DCUG survey data, average rack power density in the data center was about the same in 2013 as in 2006. This is the result of more efficient servers and greater virtualization and indicates an opportunity to add capacity by increasing rack density.

Perimeter cooling systems now operate at higher levels of efficiency and have built-in intelligence that allows them to communicate and collaborate. Cooling has also steadily migrated closer to the heat source, increasing efficiency and the ability to cool higher density racks. With the proper power and cooling infrastructure in place, most data centers can double or triple their capacity without increasing data center space.

If additional capacity is required, an aisle-based or container-based expansion strategy can be employed in which initial capacity is met by the required number of aisles or containers but space and power capacity are reserved for the addition of future “modules.” When capacity is needed, additional aisles or containers—with integrated cooling, monitoring and power protection and distribution—can be added, enabling an easy-to-implement modular growth strategy.

This approach has been popular with organizations that need to expand quickly to react to market demands or opportunities, such as colocation providers or those delivering cloud services. With proper planning, significant blocks of new capacity can be added in a fraction of the time it would take to conduct a traditional build-out or build a new data center. Because they can add capacity as it is needed, these organizations reduce upfront capital costs and increase operating efficiency by running at a higher percentage of installed capacity from day one.

Alternately, capacity can be added through cloud or colocation providers, but even in this case, infrastructure remains an important consideration. The in-house data center infrastructure has to adapt to varying loads without compromising efficiency. The infrastructure of the cloud or colocation provider should be evaluated to ensure it uses technologies and configurations proven to support high availability.

Protecting Availability
While it is difficult to predict exactly what will be expected from the data center of the future, it is hard to imagine a scenario in which downtime isn’t a serious issue. Data centers attempting to completely eliminate power-related downtime generally use a dual-bus architecture to eliminate single points of failure across the entire power distribution system. This approach includes two or more independent UPS systems, each capable of carrying the entire load with N capacity after any single failure within the electrical infrastructure. This is a proven approach for delivering Tier IV availability, but it does require custom switchgear and limits power equipment utilization to 50 percent, impacting both initial and operating costs.

In recent years, alternate configurations have emerged that support high availability while increasing power equipment utilization, including the distributed reserve dual-bus architecture. Static transfer switches (STS) are used to provide redundancy across multiple UPS systems as well as for the transfer switch itself (Figure 1).

The reserve-catcher dual-bus approach is extremely attractive to organizations seeking dual-bus availability with lower initial costs and greater efficiency. Like the distributed reserve dual bus, it uses an STS as the power tie; however, instead of creating redundancy through distributed primary UPS systems, it uses a secondary or reserve system to provide dual-bus protection across multiple primary UPS systems. The result is lower initial costs than other dual-bus approaches with increased power equipment utilization. Less critical facilities may consider a parallel redundant configuration, such as an N+1 architecture, in which “N” is the number of UPS units required to support the load and “+1” is an additional unit for redundancy. This architecture provides cost-effective scalability.
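
To make the utilization trade-off concrete, here is a minimal sketch comparing equipment utilization in a 2N dual-bus design against an N+1 parallel redundant design. The 800 kW load and 400 kW module rating are assumed figures chosen purely for illustration, not values from the article.

```python
# Hypothetical figures: an 800 kW critical load served by 400 kW UPS modules.
def utilization(load_kw, module_kw, modules_installed):
    """Fraction of installed UPS capacity that actually carries load."""
    return load_kw / (module_kw * modules_installed)

load_kw = 800.0     # assumed critical load
module_kw = 400.0   # assumed rating of one UPS module
n = 2               # modules needed to carry the full load ("N")

# 2N dual bus: two complete systems, each able to carry the whole load.
print(f"2N dual bus:  {utilization(load_kw, module_kw, 2 * n):.0%} utilization")
# N+1 parallel redundant: the required modules plus one spare.
print(f"N+1 parallel: {utilization(load_kw, module_kw, n + 1):.0%} utilization")
```

With these assumptions the 2N design holds each bus at 50 percent of capacity, while the N+1 design runs at roughly 67 percent, which is the utilization gain the reserve-catcher and parallel redundant approaches trade against the 2N architecture's fault tolerance.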

Advancing Efficiency
Organizations seeking to optimize data center efficiency need a holistic, vendor-neutral technology strategy capable of delivering meaningful reductions in data center energy costs.

Power and cooling systems can be configured to increase efficiency. Double conversion UPS topologies deliver better protection than other types of UPS because they completely isolate sensitive electronics from the incoming power source, remove a wider range of disturbances and provide a seamless transition to backup power sources. With the combination of improved operating efficiencies and the smart use of an active inverter eco-mode that provides reliable power protection and enhanced flexibility, these robust power systems can achieve efficiencies in excess of 98 percent. In addition, intelligent paralleling can improve the efficiency of redundant UPS systems in a parallel configuration by deactivating UPS modules that are not required to support the load and taking advantage of the inherent efficiency improvement available at higher loads. Additional efficiencies can also be achieved in the distribution system by bringing higher voltage power closer to the point of use, minimizing step down losses.
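
As a rough illustration of intelligent paralleling, the sketch below assumes a hypothetical efficiency-versus-load curve (the numbers are illustrative, not vendor data) and shows how deactivating modules that are not needed pushes the remaining units to higher load, where double conversion efficiency is better.

```python
# Assumed UPS efficiency at various load fractions (illustrative only).
EFF_CURVE = [(0.10, 0.85), (0.25, 0.92), (0.50, 0.95), (0.75, 0.965), (1.00, 0.97)]

def efficiency(load_fraction):
    """Linearly interpolate efficiency from the assumed load/efficiency curve."""
    if load_fraction <= EFF_CURVE[0][0]:
        return EFF_CURVE[0][1]
    for (x0, y0), (x1, y1) in zip(EFF_CURVE, EFF_CURVE[1:]):
        if load_fraction <= x1:
            return y0 + (y1 - y0) * (load_fraction - x0) / (x1 - x0)
    return EFF_CURVE[-1][1]

total_load_kw = 300.0   # assumed IT load
module_kw = 200.0       # assumed rating of one UPS module

# Fewer active modules -> each runs at a higher, more efficient load point.
for active_modules in (4, 3, 2):
    per_module = total_load_kw / active_modules / module_kw
    print(f"{active_modules} modules active: {per_module:.0%} load each, "
          f"~{efficiency(per_module):.1%} efficient")
```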

Conclusion

No single technology can remove all of the constraints that prevent organizations from optimizing data center performance and efficiency. But when technology is addressed through a systematic review and evolution of data center management technologies, each of these constraints can be effectively overcome.

Monday, September 9, 2013

Bringing Big Data to the Enterprise

- Jack Norris, Vice President, Marketing, MapR Technologies, says:

The demand for Big Data analytics is challenging traditional data storage and processing methods.  Enterprises are capturing large amounts of data from many new sources, like the web or social media, and now need a new tool to rapidly process and analyze the mix of structured and unstructured data.  Traditional data warehousing technology was not designed to deal with this kind of volume or blend of structured and unstructured data.   Instead, organizations are tapping into Hadoop's power to provide the rapid processing and analytics to unlock the informational value of their data.

Hadoop applications are increasingly moving out of the lab and into production environments.  The shift coincides with a push for Big Data analysis to move from batch-oriented historical analytics to processes that deliver more immediate results.  Enterprises are looking to tap into their data as quickly as they capture it and use it to add more value to their products and services.  As organizations look at leveraging Hadoop for Big Data analytics, enterprise-level requirements are becoming increasingly important.

Enterprise readiness encompasses critical elements such as high availability, security, data protection and multi-tenancy capabilities that ensure a cluster can be shared effectively across disparate users and solutions.

High Availability
When evaluating a Hadoop solution, understanding where a cluster is vulnerable for failure and what can cause downtime is essential.  There are alternatives that offer automated failover and the ability to protect against multiple and simultaneous failures.

Security
With multiple users trying to access and change data on the cluster, there is a need to ensure proper authentication, authorization as well as isolation at the file system and job-execution layers. Not all distributions provide these capabilities.

Data Protection
Using point-in-time snapshots to restart a process from a point before a data error or corruption is something enterprise storage administrators have come to expect; however, snapshots are not available in all Hadoop distribution offerings.  This is also true for mirroring.  In order for Hadoop to be included in an enterprise's disaster recovery plan, remote mirroring must be supported.
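
As one concrete example of the snapshot workflow described above, the sketch below drives the snapshot commands exposed by Apache HDFS (other distributions provide equivalent capabilities through their own tooling); the directory path, snapshot name and file name are hypothetical.

```python
# Hypothetical point-in-time snapshot workflow using the Apache HDFS CLI.
import subprocess

DATA_DIR = "/warehouse/events"     # assumed snapshottable directory
SNAP = "before-nightly-etl"        # assumed snapshot name

def run(*args):
    subprocess.run(args, check=True)

# One-time setup: mark the directory as snapshottable (admin privileges needed).
run("hdfs", "dfsadmin", "-allowSnapshot", DATA_DIR)

# Before a risky job: capture a read-only, point-in-time view of the data.
run("hdfs", "dfs", "-createSnapshot", DATA_DIR, SNAP)

# If the job corrupts data: copy files back out of the snapshot to recover.
run("hdfs", "dfs", "-cp",
    f"{DATA_DIR}/.snapshot/{SNAP}/part-00000", f"{DATA_DIR}/part-00000")
```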

Multi-tenancy
Multi-tenancy support, so that different users and groups can effectively share a cluster, is a required feature for most large deployments. Sharing the cluster allows for efficient utilization of the storage and computational power of Hadoop. This requires features such as segregated user environments, quotas and data placement control, which are available only with some Hadoop distributions.

Meeting all of these enterprise-level requirements while constantly upgrading features with new analytical capabilities will ensure Hadoop's strong position in the Big Data industry. The newly emerging data analytics processes driven by Hadoop will push well beyond what is currently possible with Big Data.  Hadoop has emerged as a standard platform and will soon be found in the well-established production data centers of the Fortune 1000.

About the Author
Jack Norris, vice president, marketing, MapR Technologies
Jack has over 20 years of enterprise software marketing experience. As VP of Marketing at MapR Technologies, Jack leads worldwide marketing for the industry's most advanced distribution for Hadoop. Jack's experience ranges from defining new markets for small companies, leading marketing and business development for an early-stage cloud storage software provider, to increasing sales of new products for large public companies. Jack has also held senior executive roles with Brio Technology, SQRIBE, EMC, Rainfinity, and Bain and Company.


Friday, September 6, 2013

How to Choose the Optimal Base Station Antennas

- Bo Jonsson, Senior Radio Frequency Expert, CellMax Technologies and Torbjörn Kämpe, CEO, CellMax Technologies, say:


The choice of antennas for base stations rarely receives any attention. Antennas are regarded as a cheap commodity that will “do its job” regardless of which antenna you choose. Nothing could be further from the truth. Antennas have a tremendous impact on coverage, performance, capacity and efficiency, and choosing the right ones can make or break a mobile operator’s ability to cope with the rapidly increasing demand for data.


Looking at the installed base, one could get the impression that the obvious, or even optimal, choice for almost all sites would be the traditional 18 dBi antenna. This antenna has 65° of horizontal beamwidth and around 6.5° of vertical beamwidth, and it accounts for about 80 percent of all installations at 1,700 to 2,100 MHz. The 15 dBi antenna is still quite common, especially on the lower frequencies, with a vertical beamwidth of around 14°.
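
A common rule of thumb ties an antenna's gain to its horizontal and vertical half-power beamwidths. It is only an approximation, and the constant depends on aperture efficiency, but it shows why a 65° by 6.5° pattern lands near 18 dBi:

```latex
% Rule-of-thumb gain estimate from half-power beamwidths (in degrees).
% The constant 30000 is a typical practical value; an ideal lossless
% aperture would correspond to roughly 41253.
G \approx 10\log_{10}\!\left(\frac{30\,000}{\theta_{H}\,\theta_{V}}\right)\ \text{dBi},
\qquad
G_{65^\circ \times 6.5^\circ} \approx 10\log_{10}\!\left(\frac{30\,000}{65 \times 6.5}\right) \approx 18.5\ \text{dBi}
```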




There are also high gain 21 dBi antennas and new so-called ultra high-efficiency antennas using air as dielectric and virtually eliminating power losses. These antennas improve base stations’ transmission capacity, resulting in higher signal strength, an increase in geographical area coverage, improved indoor penetration, increased traffic, improved data throughput and reduced production costs per call. One newly launched multiband antenna (by Swedish manufacturer CellMax Technologies) offers 21 dBi of gain on the high band with a 4° vertical beamwidth.


18 dBi antenna is chosen for historical reasons, not with forethought

How much forethought is behind the choice of 18 dBi antennas? The decision to use this antenna comes down to historical reasons. The 18 dBi antenna was the highest gain available from antenna vendors able to deliver in mass quantities, and it was such an obvious choice that it became almost a de facto standard, installed almost without a second thought. This was of course not without reason; it was a good compromise and has served us all very well.

But the 18 dBi antenna does not deliver the highest gain anymore – high gain ultra high-efficiency antennas do. The 18 dBi antenna was also always questioned, both for its narrow vertical beamwidth and for its horizontal beamwidth of “only” 65°.


With data surpassing voice in new 3G and 4G mobile networks, interference patterns are different, and antennas must change to stay effective. Most sectors would benefit significantly from an antenna with higher gain and a sharper upper roll-off curve than the standard 18 dBi can offer.


The choice of antenna depends on the individual situation

So what is the optimum antenna for all installations? The answer to this question is: “there is no such thing as an optimum antenna for all installations!” It always depends on the situation. If there is a tall building nearby, a small antenna with lower gain like 15 dBi would do a good job. At a rural site with maximum coverage as the most important objective, a 21 dBi antenna is clearly the best option.


At suburban sites, the focus shifts towards capacity and suppression of interference, and less towards coverage. It can be seen that a high gain antenna will provide a 3.5 dB stronger signal level in the center of the cell and suppress interference from the next cell better than the 18 dBi antenna. The 3.5 dB extra will improve in-building coverage and data transfer speed. Also, the lower interference will improve C/I and further improve performance. A high carrier-to-interference ratio (C/I) is the key parameter for efficiency, data rate and general success. So a high gain, high-efficiency antenna is again the best choice.
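
For readers less familiar with decibel arithmetic, this short sketch converts the dB advantages quoted in this article into linear power ratios.

```python
# Convert the dB advantages cited above into linear power ratios.
def db_to_power_ratio(db):
    return 10 ** (db / 10.0)

for db in (3.5, 5.5):
    print(f"{db} dB stronger signal -> about {db_to_power_ratio(db):.2f}x the power")
# 3.5 dB -> ~2.24x the power, 5.5 dB -> ~3.55x the power
```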


What about dense urban sites? After all, that is where most antennas are installed, and the requirement here is to contain the signal within the cell while at the same time providing a very strong signal for good in-building penetration and high C/I for high-speed data transfer.

Simulations and real-life tests show that there is a very sharp “cut off” at 650 meters from the high gain antenna at 6° tilt. This provides a significant reduction of interference and magnificent control of soft handover load. There is a remarkable difference between 5.5° and 6°, which proves that antenna tilt is crucial to performance, and that antenna tilt setting is a precision job where a scale on a tilt bracket is far from accurate enough. The high gain antenna still outperforms the 18 dBi antenna in dense urban sites, and the sharper upper roll-off is the main asset, actually even more important than the stronger signal level.
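
A simple geometric sketch illustrates why half a degree of tilt matters so much: the distance at which the upper edge of the main beam reaches the ground depends on the antenna height and the effective upper beam angle (tilt minus half the vertical beamwidth). The 45-meter antenna height below is an assumption chosen for illustration, not a value from the tests above.

```python
# Illustrative cut-off distance where the upper edge of the main beam hits the
# ground, for an assumed antenna height and a 4-degree vertical beamwidth.
import math

HEIGHT_M = 45.0       # assumed antenna height above the served area
V_BEAMWIDTH = 4.0     # assumed vertical half-power beamwidth in degrees

def cutoff_distance(tilt_deg):
    """Ground distance reached by the upper -3 dB edge of the main beam."""
    upper_edge = tilt_deg - V_BEAMWIDTH / 2.0   # angle below the horizontal
    return HEIGHT_M / math.tan(math.radians(upper_edge))

for tilt in (5.5, 6.0):
    print(f"{tilt:.1f} deg tilt -> upper beam edge reaches ~{cutoff_distance(tilt):.0f} m")
# 5.5 deg -> ~735 m, 6.0 deg -> ~645 m: half a degree moves the cell edge by ~90 m
```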


Going further in tilt, it can be seen that the high gain antenna provides a stunning 5.5 dB stronger signal, and even more in interference rejection, beyond 450 meters. That does wonders for soft handover overhead and data transfer speed. So again, a clear win for the 21 dBi high gain antenna with its very sharp upper roll-off.


High gain, high efficiency is the way to go

So, in summary and generalizing a bit, we can see that the 21 dBi antenna is a better choice in surprisingly many situations, for much the same reasons that the 18 dBi antenna once replaced earlier antennas with lower gain. These antennas very often offer significant improvements over the traditional 18 dBi antenna. But they have a significantly more complex feeding network, so efficiency becomes a very important parameter. After all, extending and improving coverage requires more radiated radio frequency power, not less. So look not only for a high gain antenna, but also for a high-efficiency antenna!