MySQL Setup at Hostinger Explained

At Hostinger, we have various MySQL setups, ranging from standalone replication-less instances, through Percona XtraDB Cluster (hereafter PXC) and ProxySQL-routing-based setups, to absolutely custom and unique solutions, which I am going to describe in this blog post.

We do not have elephant-sized databases for internal services such as APIs, billing, and clients. Thus, almost every decision ends up with high availability as a higher priority than scalability.

Nevertheless, vertical scaling works well enough in our case, as the database size does not exceed 500 GB. Another top requirement is the ability to access the master node, as our read and write workloads are roughly equal.

Our current setup uses a PXC cluster made up of three nodes, without geo-replication, to store all data about clients, servers, and more. All nodes run in the same datacentre.
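For reference, here is a minimal sketch of the Galera-related settings each node in such a three-node cluster would carry in its my.cnf; the hostnames, cluster name, and provider path are invented for illustration and vary between PXC versions:

    # Minimal wsrep settings for a three-node PXC cluster (illustrative values)
    [mysqld]
    wsrep_provider=/usr/lib/galera4/libgalera_smm.so
    wsrep_cluster_name=internal-services
    wsrep_cluster_address=gcomm://db1.example.com,db2.example.com,db3.example.com
    wsrep_node_name=db1
    # Galera replication requires row-based binlogging and InnoDB
    binlog_format=ROW
    default_storage_engine=InnoDB
    innodb_autoinc_lock_mode=2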

We plan to move this cluster to geo-replicated clusters in three locations: the United States, the Netherlands, and Singapore. This will allow us to guarantee high availability even if an entire location becomes unavailable.

Since PXC uses synchronous replication, writes will see higher latency, but thanks to a local replica in every location, reads will be much faster.

We did some research on MySQL Group Replication, but it requires the nodes to be close to each other and is more sensitive to network latency.

As the MySQL documentation puts it, Group Replication "is designed to be deployed in a cluster environment where server instances are very close to each other, and is affected by both network latency as well as network bandwidth."

We adopted PXC first, so we know how to deal with it in critical situations and how to keep it highly available.

For the 000webhost.com project and hAPI (Hostinger API) we use the custom solution mentioned above, which selects the master node using a Layer 3 protocol.

One of our best friends is the BGP protocol, which is old enough to buy its own beer, so we use it to the fullest. This implementation also uses BGP as the underlying protocol and helps point traffic to the actual master node. We use ExaBGP as the service that speaks BGP and announces the virtual IP (VIP) address from the master node only.
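To make this concrete, here is a rough sketch of what such an ExaBGP configuration could look like; the neighbour address, AS numbers, and VIP are made-up examples, and in the real setup the route is announced and withdrawn dynamically rather than declared statically:

    # /etc/exabgp/exabgp.conf -- illustrative values only
    neighbor 192.0.2.1 {                  # upstream router (example address)
        router-id 192.0.2.10;
        local-address 192.0.2.10;         # this database node
        local-as 65010;
        peer-as 65000;

        # Announce the MySQL VIP as a /32; in practice this route exists
        # only while the node holds the master lock.
        static {
            route 203.0.113.100/32 next-hop self;
        }
    }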

You should be asking: but how do we make sure MySQL queries go to one and the same instance instead of being split between both? We use Zookeeper's ephemeral nodes to acquire a lock that acts as a mutex.

Zookeeper acts like a circuit breaker between the BGP speakers and the MySQL clients. If the lock is acquired, we announce the VIP from the master node and the application sends queries along that path. If the lock is released, another node can take it and announce the VIP, so the application keeps sending queries without any extra effort.
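Below is a minimal sketch of how such a helper process could work, assuming the kazoo Zookeeper client and ExaBGP's text API, where the process prints announce/withdraw commands to its stdout; the Zookeeper hosts, lock path, node identifier, and VIP are invented for illustration:

    #!/usr/bin/env python3
    # Illustrative sketch of an ExaBGP helper process: it competes for a
    # Zookeeper ephemeral node and announces the VIP only while it owns it.
    import sys
    import time

    from kazoo.client import KazooClient
    from kazoo.exceptions import NodeExistsError, NoNodeError

    VIP = "203.0.113.100/32"            # example VIP, not the real one
    LOCK_PATH = "/mysql/master-lock"    # example Zookeeper path
    MY_ID = b"db1"                      # example identifier for this node

    def announce():
        # ExaBGP reads announce/withdraw commands from this process's stdout.
        sys.stdout.write("announce route %s next-hop self\n" % VIP)
        sys.stdout.flush()

    def withdraw():
        sys.stdout.write("withdraw route %s next-hop self\n" % VIP)
        sys.stdout.flush()

    def main():
        zk = KazooClient(hosts="zk1:2181,zk2:2181,zk3:2181")
        zk.start()
        holding = False
        while True:
            try:
                # Ephemeral node: it disappears when our session dies, so the
                # lock is released automatically and another node can take it.
                zk.create(LOCK_PATH, MY_ID, ephemeral=True)
            except NodeExistsError:
                pass
            try:
                owner, _ = zk.get(LOCK_PATH)
            except NoNodeError:
                owner = None
            if owner == MY_ID and not holding:
                announce()
                holding = True
            elif owner != MY_ID and holding:
                withdraw()
                holding = False
            time.sleep(1)

    if __name__ == "__main__":
        main()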

[Diagram: MySQL setup at Hostinger]

The second question comes: what conditions must be met to stop announcing the VIP? This can be implemented differently depending on the use case; in ours, we release the lock if the MySQL process goes down, by expressing the dependency with Requires= in the systemd unit file. As the systemd documentation notes:

Besides, with or without specifying After=, this unit will be stopped if one of the other units is explicitly stopped.

With systemd we can create a nice dependency tree which ensures that all of these conditions are met. Stopping, killing, or even restarting MySQL will cause systemd to stop the ExaBGP process and withdraw the VIP announcement. The end result: a new master is elected.
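For example, a stripped-down unit for the BGP speaker might look like this; the unit names and paths are illustrative, not our exact configuration:

    # /etc/systemd/system/exabgp.service -- illustrative example
    [Unit]
    Description=ExaBGP speaker announcing the MySQL master VIP
    # Requires= ties the BGP speaker to MySQL: when mysql.service stops,
    # this unit is stopped as well, withdrawing the VIP announcement.
    Requires=mysql.service
    After=mysql.service

    [Service]
    ExecStart=/usr/bin/exabgp /etc/exabgp/exabgp.conf
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target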

We tested these master failovers during our game days and observed no significant impact.

If you think good architecture is expensive, try bad architecture
