Technology Hosting Our Websites

We based our decision on a few factors and, to be honest, learned the hard way about service and quality of support. Our standards were not being met by our first hosting provider, so, at significant cost, we seamlessly migrated our clients to our current partner.

Cost is a significant factor, but we have also judged providers on other, more important criteria:

  • Availability
  • Performance
  • Support
  • Flexibility

Our partner offers a unique cloud platform, clustered over multiple servers and built from the ground up to offer high performance, reliability and flexibility.

Load Balancing

When a user visits your website, an advanced layer 7 load balancer despatches that request to one of our backend application servers. These process your website's code, whether that is static HTML, PHP, Perl, ASP or Ruby, and feed that data back to the load balancer. This all happens seamlessly and on a per-request basis. This means that, should a site receive a large influx of traffic due to being featured on TV, for instance, each webserver only sees a modest increase in traffic and continues to perform normally. This is in contrast to traditional hosting systems, which would quickly become overloaded. How often have you been watching Dragon's Den, heard a website mentioned, tried to visit it and found it to be inaccessible? That wouldn't happen with our system, as such a traffic surge is a drop in the ocean compared to the traffic volumes already being processed.
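The per-request, round-robin dispatch described above can be sketched in a few lines of Python. This is an illustrative model only (the backend names and request paths are hypothetical, not the platform's real configuration); it shows how rotating through backends spreads a traffic surge evenly across the cluster.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Minimal sketch of per-request dispatch: each incoming request is
    handed to the next backend application server in rotation."""

    def __init__(self, backends):
        self._rotation = cycle(list(backends))

    def dispatch(self, request_path):
        # A fresh backend is chosen on every request, so a sudden surge
        # is shared evenly rather than landing on one server.
        backend = next(self._rotation)
        return backend, request_path

# Six requests across three (hypothetical) application nodes:
balancer = RoundRobinBalancer(["app1", "app2", "app3"])
targets = [balancer.dispatch(f"/page/{i}")[0] for i in range(6)]
# Each node handles exactly two of the six requests.
```

A real layer 7 balancer also inspects the HTTP request itself and tracks backend health, but the even spreading of load follows this same rotation principle.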

Application Servers

Your frontend web application is served by one of our dedicated application nodes, chosen in a "round robin" fashion every time a request is made to your website. Packed with RAM and CPU power, these servers deal only with serving web pages and aren't hampered by providing other services such as email, DNS, etc.

Quality of Service

Unlike almost any other hosting provider, we don't limit overall CPU time at all. However, we do have sophisticated systems in place which prevent a single user from monopolising resources across the cluster, whether through a highly inefficient application or an attempt to do batch processing (or anything more nefarious) on the frontend web servers. Such processes are detected and killed within 60 seconds, ensuring that the cluster is fast all of the time. For users who do need to perform background processing (such as image encoding, video encoding, database imports, etc.) we provide specialised servers for this, with full access to your webspace, at no extra cost.
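The detect-and-kill behaviour above amounts to a watchdog sweep over running processes. The sketch below is a hypothetical model, not the platform's actual implementation: it takes (pid, accumulated CPU seconds) pairs and a termination callback, and culls anything over the limit.

```python
CPU_LIMIT_SECONDS = 60  # runaway frontend processes are killed within a minute

def sweep(processes, kill):
    """One pass of a hypothetical watchdog.

    `processes` is a list of (pid, cpu_seconds) pairs sampled from the
    frontend web servers; `kill` is the termination callback. Processes
    over the CPU limit are killed, the rest survive to the next sweep.
    """
    survivors = []
    for pid, cpu_seconds in processes:
        if cpu_seconds > CPU_LIMIT_SECONDS:
            kill(pid)  # e.g. a batch job mistakenly run on a web node
        else:
            survivors.append((pid, cpu_seconds))
    return survivors

# A normal page render, a runaway batch job, and a fresh request:
killed = []
remaining = sweep([(101, 3.2), (102, 75.0), (103, 0.4)], killed.append)
```

Running the sweep on a short interval is what keeps the "within 60 seconds" guarantee: well-behaved page renders finish in fractions of a second and are never touched.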


Backups

We take multiple daily 'snapshots' of your data. This captures changes and enables us to recover a file from almost any point in the last 30 days. In addition, each storage device is replicated not once but twice to prevent data loss due to hardware failure.


Database Servers

Database servers are a special case. They need two things to perform well: disk I/O and RAM. We don't use network storage, instead opting for fast, local SAS disks in a RAID 10 configuration or, in some cases, solid state storage. Each database is dumped to an SQL file in your home directory once per day and that data is then replicated in the normal way. If you require more frequent SQL dumps, this can be set up via your control panel.
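The nightly dump-to-SQL-file step works by serialising the whole database as plain SQL statements. As a self-contained illustration (using Python's built-in sqlite3 as a stand-in; the table and values are invented for the example, and a hosting platform's databases would typically be MySQL-style), the idea looks like this:

```python
import sqlite3

def dump_to_sql(conn):
    """Serialise a database as a list of plain SQL statements, the same
    form a nightly dump file in your home directory would take."""
    return list(conn.iterdump())

# Build a tiny in-memory database to dump:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visits (page TEXT, hits INTEGER)")
conn.execute("INSERT INTO visits VALUES ('/', 42)")
conn.commit()

# The dump contains CREATE TABLE and INSERT statements that can
# rebuild the database from scratch on any server.
statements = dump_to_sql(conn)
```

Because the dump is an ordinary file, it is then replicated like any other data, which is why a plain-SQL export is a convenient unit for point-in-time database recovery.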

Power and Network

All our hosted services benefit from uninterruptible power, and the cluster is no exception. Every server is fed from the primary building UPS, a large battery bank which 'smoothes out' any fluctuations in the mains voltage and provides standby power in the event of a loss of utility power, allowing time for the generators to start up. Once started, the generators can power the datacentre indefinitely in the case of a large-scale power outage.

In addition, storage nodes benefit from dual power supplies and are fed from a secondary UPS, keeping things live and your data safe if the primary building UPS were ever to fail.