Data back-up and failover

Security system ‘availability’ jargon buster

Duncan Cooke

Business development manager, UK & Europe, Stratus Technologies

December 11, 2017


Does it really matter if software applications supporting your security systems are only available 99% of the time?

Probably not if you’ve installed a video surveillance system primarily to deter shoplifters. But 99% availability allows roughly 100 minutes of unplanned downtime per week, a significant loss if you’ve invested in an integrated, mission-critical security solution.
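The arithmetic behind that figure is easy to check. A minimal sketch (the function name is ours, purely for illustration):

```python
# Sketch: convert an availability percentage into expected downtime.
# Figures are arithmetic illustrations, not vendor specifications.

MINUTES_PER_WEEK = 7 * 24 * 60    # 10,080
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes(availability_pct: float, period_minutes: int) -> float:
    """Return expected downtime in minutes for a given availability level."""
    return period_minutes * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99, 99.999):
    weekly = downtime_minutes(pct, MINUTES_PER_WEEK)
    yearly = downtime_minutes(pct, MINUTES_PER_YEAR)
    print(f"{pct}% available -> {weekly:.1f} min/week, {yearly:.0f} min/year")
```

At 99% availability the allowance is about 100.8 minutes of downtime per week; each extra ‘nine’ cuts it by a factor of ten.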

There is no shortage of solutions available to minimise disruption if a server fails or you have to recover from a cyber attack. Here is a jargon-busting overview of the best of them.

Back-up and restore

A standard x86-based server typically stores data on RAID (Redundant Array of Independent Disks) storage devices. The capabilities of x86 servers vary from vendor to vendor, supporting a variety of operating systems and processors.

However, a standard x86 server may have only basic backup, data-replication and failover procedures in place, which leaves it susceptible to catastrophic server failures.

A standard server is not designed to prevent downtime or data loss. In the event of a crash, the server stops all processing and users lose access to their applications and information, so data loss is likely.

Standard servers lack protection for data in transit, so if the server goes down, this data is also lost. Though a standard x86 server does not come from its vendor as highly available, there is always the option to add availability software following initial deployment and installation.

High availability

Traditional high-availability solutions that can bring a system back up quickly are typically based on server clustering: two or more servers running with the same configuration, connected with cluster software to keep the application data updated on both/all servers.

Servers (nodes) in a high-availability cluster communicate with each other by continually checking for a ‘heartbeat’, which confirms other servers in the cluster are up and running. If a server fails, another server in the cluster, designated as the failover server, will automatically take over, ideally with minimal disruption to users.
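The heartbeat-and-failover logic can be sketched in a few lines. This is a toy illustration, not how production cluster managers work; the `is_alive` and `promote_standby` callbacks are hypothetical stand-ins for the cluster software’s health checks and failover actions:

```python
import time

# Illustrative sketch of a heartbeat monitor in a two-node cluster.
# Real cluster software (quorum, fencing, split-brain handling) is
# considerably more involved than this.

HEARTBEAT_INTERVAL = 1.0  # seconds between checks
MISSED_LIMIT = 3          # consecutive misses before declaring failure

def monitor(node, is_alive, promote_standby, interval=HEARTBEAT_INTERVAL):
    """Poll a peer node; trigger failover after MISSED_LIMIT missed beats."""
    missed = 0
    while True:
        if is_alive(node):
            missed = 0          # heartbeat received: reset the counter
        else:
            missed += 1
            if missed >= MISSED_LIMIT:
                promote_standby(node)  # designated failover node takes over
                return
        time.sleep(interval)

# Simulated run: the peer answers twice, then goes silent.
beats = iter([True, True, False, False, False])
failovers = []
monitor("node-a", lambda n: next(beats), failovers.append, interval=0)
print(failovers)  # ['node-a']
```

Requiring several consecutive missed heartbeats before failing over avoids promoting the standby on a single dropped packet.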

Computers in a cluster are connected by a local area network (LAN) or wide area network (WAN) and managed by cluster software.

Failover clusters require a storage area network (SAN) to provide shared access to data required to enable failover capabilities. This means that dedicated shared storage or redundant connections to the corporate SAN are also necessary.

While high-availability clusters improve availability, their effectiveness is highly dependent on the skills of specialised IT personnel. Clusters can be complex and time-consuming to deploy and require programming, testing and continuous administrative oversight. As a result, the total cost of ownership is often high.

It is also important to note that downtime is not eliminated with high-availability clusters. In the event of a server failure, all users currently connected to that server lose their connections. Therefore, data not yet written to the database is lost.

Fault tolerance

Fault-tolerant servers, also referred to as continuous availability solutions, provide the highest availability because they have full system component redundancy with no single point of failure. End users never experience an interruption in server availability because downtime is pre-empted.

Some 67% of best-in-class organisations use fault-tolerant servers to provide high availability to at least some of their most critical applications.

Fault tolerance in a server is achieved by having a second set of completely redundant hardware components in the system architecture. The server’s software automatically synchronises replicated components, executing all processing in lockstep so that ‘in flight’ data is always protected.

The two sets of CPUs, RAM, motherboards and power supplies all process the same information simultaneously. Therefore if one component fails, its companion component is already there and running, so the system keeps functioning.
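As a software analogy (real fault tolerance happens in hardware, with CPUs and memory running in lockstep), the idea of redundant components processing the same operation can be sketched like this; all class and method names here are hypothetical:

```python
# Toy sketch of component redundancy: every write is applied to two
# replicas in lockstep, so if one fails the other already holds the
# data and service continues without interruption.

class Replica:
    def __init__(self):
        self.store = {}
        self.healthy = True

    def write(self, key, value):
        if self.healthy:
            self.store[key] = value

    def read(self, key):
        return self.store.get(key) if self.healthy else None

class FaultTolerantPair:
    def __init__(self):
        self.a, self.b = Replica(), Replica()

    def write(self, key, value):
        # Both replicas process the same operation simultaneously.
        self.a.write(key, value)
        self.b.write(key, value)

    def read(self, key):
        # Service continues from whichever replica is still healthy.
        for replica in (self.a, self.b):
            if replica.healthy:
                return replica.read(key)
        return None

pair = FaultTolerantPair()
pair.write("camera-01", "recording")
pair.a.healthy = False         # simulate a component failure
print(pair.read("camera-01"))  # still available: "recording"
```

Because the companion component was already processing the same data, the failure is invisible to the user; contrast this with a cluster, where failover drops in-flight connections.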

Fault-tolerant servers also have built-in, fail-safe software technology that detects, isolates and corrects system problems before they cause downtime. The operating system, middleware and application software are therefore protected against errors. In-memory data is also constantly protected and maintained.

A fault-tolerant server is managed exactly like a standard server, making the system easy to install, use and maintain. No software modifications or special configurations are necessary and the sophisticated back-end technology runs in the background, invisible to anyone administering the system.

In business environments where downtime must be kept to the absolute minimum, ensuring you have fault-tolerant systems will give you peace of mind that crucial data won’t be lost.
