Frank Huerta, CEO of TransLattice, says:
Today, film studios and production companies are often spread out geographically, with collaborative teams working on small pieces of a creative puzzle from locations all over the world. Imagine you are an animator in Australia whose role is to design the characters for an upcoming film. Meanwhile, your counterpart in Brazil is responsible for creating the virtual environments where those characters live, while other artists and engineers work on similar, yet distinct, parts of the overall project. Amid all of these team members, the director is in Los Angeles, anxiously awaiting updates and final versions for approval by the executive team.
On a whim, the director decides the main character’s shirt should be yellow, not black. Now what happens? While you are busy making the changes to put the character in yellow, other artists may still be making edits to the original black-shirted character, potentially resulting in hours of lost production time. So how do you synchronize petabytes of data being created every day?
Unarguably, we live in a data-centric world, and as the example above shows, data synchronization problems can make or break a project. Ensuring that everyone working on a project has access to the most current files, anywhere in the world, is extremely important.
In multimedia and production environments, managing metadata – the data about data – has become a daunting task. Content developers generate terabytes of data over the lifespan of any particular project, all of it with attached metadata. As multimedia demands increase and film continues its shift to a digital medium, many organizations are looking to the cloud as a new means of storing data. Because of this shift, securing, managing and providing immediate access to data stored in the cloud has become a pressing concern.
Data – A Growing Monster
Transparency Market Research recently reported that the global big data market is estimated to reach $48.3 billion by 2018, a staggering increase from its reported worth of $6.3 billion in 2012. That growth means an enormous amount of data must be managed, stored, secured and made readily available, and it can be attributed largely to the data generated by images, videos, games, and streaming music and movies.
Content developers have been dealing with this data explosion for years, and they have used metadata to help manage it. Metadata, created automatically in the background by software, stores the descriptors a system needs to manage data as it is created – including size, encryption keys, path and file name. Without the metadata attached to photos, music and video files, it would be very difficult to manage content creation properly. The digital media industry is primed for a solution that simply and effectively manages this growing mass of unstructured data while providing access when and where it’s needed.
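As a concrete illustration, here is a minimal sketch in Python of the kind of metadata record described above. The field names and the describe helper are hypothetical, invented for this example rather than taken from any particular product’s schema; real pipelines attach far richer descriptors.

```python
import hashlib
from dataclasses import dataclass
from pathlib import Path

@dataclass
class AssetMetadata:
    """Illustrative metadata record for a media asset (hypothetical fields)."""
    file_name: str
    path: str
    size_bytes: int
    checksum: str           # content hash, useful for spotting out-of-sync copies
    encryption_key_id: str  # reference to a key, never the key material itself
    version: int            # incremented on each approved edit

def describe(asset_path: Path, key_id: str, version: int = 1) -> AssetMetadata:
    """Build a metadata record from a file on disk."""
    data = asset_path.read_bytes()
    return AssetMetadata(
        file_name=asset_path.name,
        path=str(asset_path.parent),
        size_bytes=len(data),
        checksum=hashlib.sha256(data).hexdigest(),
        encryption_key_id=key_id,
        version=version,
    )
```

Records like this are what must stay consistent across every site as artists edit the underlying files.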
It’s important, now more than ever, that content creators pay attention to how and where this data is stored, accessed and managed.
Databases Reborn
Content providers are looking for new ways to manage metadata so that all involved parties – editors, producers, engineers and developers spanning multiple geographic locations – can access, update and store data that stays in sync at all times.
One way content providers can address these challenges is by storing metadata in a geographically distributed relational database management system. In doing so, content developers get local access to data that is managed as a single logical database, resolving synchronization and accessibility issues.
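To make that concrete, the sketch below uses Python’s built-in sqlite3 module as a stand-in for the local node of such a system; the asset_metadata table and its columns are invented for illustration. The point is that every site issues ordinary SQL against what looks like one database, while the distributed system keeps the rows current everywhere.

```python
import sqlite3

# Stand-in: sqlite3 plays the role of the nearest node of a distributed RDBMS.
# In a real deployment the connection would point at a local node, and the
# database system, not the application, would handle replication.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE asset_metadata (
        asset_id   TEXT,
        version    INTEGER,
        file_name  TEXT,
        updated_by TEXT,
        PRIMARY KEY (asset_id, version)
    )
""")
conn.execute(
    "INSERT INTO asset_metadata VALUES (?, ?, ?, ?)",
    ("hero_character", 2, "hero_v2_yellow_shirt.ma", "animator_sydney"),
)

# Any site can ask for the current version with plain SQL; the distributed
# database is responsible for making sure every site sees the same answer.
row = conn.execute("""
    SELECT file_name, version FROM asset_metadata
    WHERE asset_id = 'hero_character'
    ORDER BY version DESC LIMIT 1
""").fetchone()
print(row)  # ('hero_v2_yellow_shirt.ma', 2)
```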
Scale Horizontally, Not Vertically
Decentralizing data is a strong option for content providers contending with performance that isn’t always perfect. New technologies that decentralize data can improve business adaptability by keeping users “synched” with one another. Unlike typical infrastructures that “scale up” by adding components to increase performance and backup capacity, these new approaches achieve better performance by “scaling out” – in other words, by adding replica database nodes.
These new architectures store data automatically across all nodes based on geography, usage and policy, delivering information where and when it’s needed. With this approach, all data and associated metadata are replicated across the nodes to guarantee availability. If a single node fails, users are automatically re-routed to another node, preventing any lapse in productivity. Once the original node is back up and running, it resumes participation in the flow of data and local users are reconnected without ever being aware of the technical failure. Additionally, organizations can choose where these nodes are placed – either on-site or in the cloud – ensuring the appropriate response time for data retrieval.
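The re-routing behavior described above can be sketched in a few lines of Python. This is only a toy model of client-side failover across replicas; the Node class, the node names and the NodeUnavailable error are invented for illustration, and a real distributed database would perform this routing internally and transparently.

```python
class NodeUnavailable(Exception):
    pass

class Node:
    """Toy database node; a real node would be a network endpoint."""
    def __init__(self, name: str, healthy: bool = True):
        self.name = name
        self.healthy = healthy

    def query(self, sql: str) -> str:
        if not self.healthy:
            raise NodeUnavailable(self.name)
        return f"{self.name} answered: {sql}"

def query_with_failover(nodes: list[Node], sql: str) -> str:
    """Try nodes in order of preference (nearest first); re-route on failure."""
    for node in nodes:
        try:
            return node.query(sql)
        except NodeUnavailable:
            continue  # transparent re-route to the next replica
    raise RuntimeError("no replica available")

# The nearest node is down; the client is re-routed without any user action.
nodes = [Node("sydney", healthy=False), Node("sao-paulo"), Node("los-angeles")]
print(query_with_failover(nodes, "SELECT ... FROM asset_metadata"))
```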
For production teams in the entertainment industry, geographically distributed databases can be extremely beneficial because they have the potential to synchronize metadata automatically, thereby enhancing the efforts of the entire team, regardless of location.
Improvements in Economies of Scale
An additional benefit of geographically distributed databases is cost reduction. Database architectures that scale out make the required performance much easier to achieve than traditional systems do. As new facilities come online, nodes can be added alongside them, either on-premises or in the cloud, and as those facilities grow, nodes can be added quickly to keep up with increasing data demand.
As companies in the entertainment industry look for ways to realize the economic benefits of cloud computing and virtualization, it’s becoming apparent that traditional database solutions fail to take full advantage of the flexibility and performance these new technologies offer.
Organizations in a wide range of industries are looking for new ways to utilize cloud economics to their full advantage, while still maintaining control of the data and providing increased support for users. The industry is ripe for technologies that offer options for deployment on-site, in the cloud, or a combination of both.
Production crews and content developers need solutions that give their collaborative teams access to the most up-to-date data. To achieve this, basic IT architectures need to incorporate technologies that are resilient and that prevent important data from becoming unavailable or out of sync. Technologies that give content developers consistent, globally available delivery methods are the next phase in maintaining an ever-changing, robust infrastructure on tight budgets.
About the Author:
Frank Huerta is CEO and co-founder of TransLattice, where he is responsible for the vision and strategic direction of the company. He has been published in numerous trade publications and is a respected leader in the database management industry. He holds an MBA from the Stanford Graduate School of Business and an undergraduate degree in physics, cum laude, from Harvard University.