Today, database administrators are under pressure to speed up transaction response times and reduce latency, and to do it without breaking the bank.
If you are in this situation, one option is to look at hybrid arrays, which can often deliver performance similar to all-solid-state arrays at a fraction of the cost per GB or TB.
Here’s why:
Thin provisioning, deduplication and compression are becoming standard check-box items in the enterprise storage system market, but these features have a dramatic impact on an array's performance and ultimate cost. While it sounds counter-intuitive, deduplication and compression can actually increase performance. When these data reduction tools are implemented in front of the storage system's cache, the application's data reduction factor becomes a multiplier on how much data can reside in cache, boosting cache hit ratios and ultimately increasing performance. Other operations, such as a storage system's metadata processing, RAID operations and snapshot mapping, are also major performance factors, and a system that minimizes the impact of these overhead functions can produce impressive results. The end result is the ability to optimize both performance and capacity, which have traditionally been either-or tradeoffs.
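To make the cache-multiplier effect concrete, here is a back-of-the-envelope sketch in Python. The working-set and cache sizes and the simple fits-in-cache hit model are assumptions for illustration, not figures from any particular array.

```python
# Back-of-the-envelope model: with dedupe/compression in front of the cache,
# the data reduction factor multiplies how much logical data the same
# physical cache can hold.

def effective_hit_ratio(working_set_gb, cache_gb, reduction_factor):
    """Naive model: the hit ratio is the fraction of the hot working set
    that fits in cache after data reduction, capped at 1.0."""
    effective_cache_gb = cache_gb * reduction_factor
    return min(1.0, effective_cache_gb / working_set_gb)

WORKING_SET_GB = 2000   # hot database data (assumed figure)
CACHE_GB = 500          # physical cache (assumed figure)

print(effective_hit_ratio(WORKING_SET_GB, CACHE_GB, 1.0))  # 0.25 with no reduction
print(effective_hit_ratio(WORKING_SET_GB, CACHE_GB, 3.0))  # 0.75 with 3:1 reduction
```

Under these assumed numbers, a 3:1 reduction factor triples the share of the working set that is served from cache without adding a single gigabyte of physical cache.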
For certain data sets, such as database indexes and logs, or even entire databases, the ability to 'pin' them into a high-performance SSD pool is an important capability. Pinning lets the DBA dictate which data sets are held in SSD, eliminating the risk of a performance hit from accessing data on rotating media.
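As a rough illustration of what pinning means inside the array, here is a hypothetical tiering policy in Python. The Volume class, the place() function and the heat threshold are all invented for this sketch; real arrays expose pinning through their own management interfaces.

```python
from dataclasses import dataclass

@dataclass
class Volume:
    name: str
    pinned_to_ssd: bool = False

def place(volume: Volume, block_heat: float) -> str:
    """Return the tier a block should live on. Pinned volumes bypass
    heat-based tiering entirely, so index and log I/O never lands on
    rotating media."""
    if volume.pinned_to_ssd:
        return "ssd"
    return "ssd" if block_heat > 0.8 else "hdd"

logs = Volume("db_logs", pinned_to_ssd=True)
data = Volume("db_data")

print(place(logs, block_heat=0.1))  # ssd: pinned, access pattern is irrelevant
print(place(data, block_heat=0.1))  # hdd: cold, unpinned data tiers down
```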
Why are these points important?
IT organizations are implementing server
virtualization, desktop virtualization, shared databases and file services --
all of which not only demand high capacity storage, but also high performance
storage that can handle multiple concurrent applications. Advances in data
reliability, protection and efficiency, such as RAID, compression and
deduplication, also demand better performance. As a result, database administrators need to rebalance the key storage requirements: capacity, performance, compatibility, usability (fit for purpose), reliability, data protection and value for money, in order to maintain an efficient IT infrastructure that supports critical business operations and applications.
Look for a system that combines a Redirect-on-Write (ROW) file system with metadata acceleration technology. This combination is really the holy grail of storage, bringing high performance, high capacity and high reliability together at a low cost. This architectural approach, coupled with best-in-class data protection features, results in a highly reliable infrastructure at a price that doesn't break the budget.
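To see why redirect-on-write makes snapshots cheap, and why metadata speed matters so much in this design, here is a minimal conceptual sketch in Python. The RowVolume class is invented for illustration and does not model any particular vendor's implementation.

```python
class RowVolume:
    """Toy redirect-on-write volume: a metadata map from logical blocks
    to physical locations, never overwritten in place."""

    def __init__(self):
        self.blocks = {}       # logical block -> (physical location, data)
        self.snapshots = []    # each snapshot is a frozen copy of the map
        self.next_physical = 0

    def write(self, logical_block, data):
        # ROW: every write lands on a fresh physical location and only the
        # metadata map is updated. No read-modify-write of old data, which
        # is why metadata handling dominates performance in this design.
        self.blocks[logical_block] = (self.next_physical, data)
        self.next_physical += 1

    def snapshot(self):
        # A snapshot just freezes the current block map, so it is a
        # metadata-only operation whose cost is independent of volume size.
        self.snapshots.append(dict(self.blocks))

vol = RowVolume()
vol.write(0, "v1")
vol.snapshot()                 # metadata-only
vol.write(0, "v2")             # redirected to a new physical block
print(vol.blocks[0])           # (1, 'v2'): live data
print(vol.snapshots[0][0])     # (0, 'v1'): snapshot still sees old version
```

Because every write and every snapshot reduces to a metadata update in this model, accelerating metadata processing is what keeps such an architecture fast under load.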