How to run Vidispine with high availability, Part 1
To get started with a high availability Vidispine set-up, you can run Vidispine as a cluster on multiple servers that share a common database, or use an even simpler set-up with multiple instances running on the same server.
In this first part of the high availability series, we will show you an easy way to increase the availability of your Vidispine-powered video backend. The other parts will add increasingly advanced set-ups, leading to a fully highly available video backend. In Part #2 we will add HAProxy, in Part #3 we will add pgpool-II for a high availability database, and in the final Part #4 we will show how you can work with SolrCloud.
Clustering Vidispine on multiple servers
A cluster can be created by installing Vidispine on multiple servers and configuring all instances to connect to the same database.
The one setting that must be configured is bindAddress: the address that an instance binds to and publishes to the other members of the cluster.
cluster:
  bindAddress: vs1.example.com
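On the second server, the same setting points at that server's own address instead. As a sketch (the hostname vs2.example.com is illustrative):

cluster:
  bindAddress: vs2.example.com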
You can also change the address that is published, for example if there is a firewall with port-forwarding rules set up in front of each server.
cluster:
  bindAddress: vs1.example.com
  bindPort: 7800
  bindPortRange: 0
  externalAddress: fw.example.com
  externalPort: 7801
For this to work, you also need to use an external ActiveMQ instance, so make sure that the embedded broker is disabled and that the configuration points to your ActiveMQ instance.
broker:
  url: tcp://activemq.example.com:61616
  #embeddedBroker: broker:(tcp://localhost:61616)
Quick cluster setup
It is also possible to create a cluster on a single machine by starting multiple server processes, each with a different configuration file.
Make sure that all instances use distinct ports, then start the instances that are to be part of the cluster:
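For example, the two configuration files could differ only in the cluster ports, using only the settings shown above (localhost and the port numbers here are illustrative; any other ports the instances open, such as the HTTP API port, must also differ between the files):

server1.yaml:

cluster:
  bindAddress: localhost
  bindPort: 7800
  bindPortRange: 0

server2.yaml:

cluster:
  bindAddress: localhost
  bindPort: 7801
  bindPortRange: 0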
Tail the log and you should see that the processes have found each other and have formed a cluster.