The service can be partitioned in many different ways, whether to cope with a significant increase in usage or simply out of a desire to separate the public and private sides onto different servers.

If there is one thing to take away from these descriptions, it is that the publication service can be "chopped up" in many different ways, assigning different functionality to different servers. For example, uploading images to the service (and scaling them to different sizes in the process) would then not load the same server that is used to display the finished works. Reliability can be increased by running several servers for a given function and letting a load-balancing component decide which one handles each request. Database servers and files can be replicated so that even a hardware failure does not interrupt the service, because all data and files are kept in at least two copies at all times.
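To make the load-balancing idea more concrete, the sketch below shows one way, in Python, that incoming requests could be distributed between several identical application servers in round-robin fashion, skipping servers that fail a health check. The backend addresses and the /health endpoint are assumptions made for this illustration only; in practice a dedicated component such as a reverse proxy would normally do this work.

```python
# A minimal, illustrative round-robin load balancer.
import itertools
import urllib.request

# Hypothetical application servers that all provide the same functionality.
BACKENDS = [
    "http://app1.internal:8080",
    "http://app2.internal:8080",
    "http://app3.internal:8080",
]

_rotation = itertools.cycle(BACKENDS)

def is_healthy(backend: str) -> bool:
    """Consider a backend usable if its (assumed) health endpoint answers."""
    try:
        with urllib.request.urlopen(backend + "/health", timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

def pick_backend() -> str:
    """Return the next healthy backend in round-robin order."""
    for _ in range(len(BACKENDS)):
        candidate = next(_rotation)
        if is_healthy(candidate):
            return candidate
    raise RuntimeError("no healthy backend available")

if __name__ == "__main__":
    print("request would be forwarded to:", pick_backend())
```

The point of the rotation is simply that no single server receives all the traffic, and that a server which stops answering is quietly passed over until it recovers.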

Other noteworthy possibilities include a network file system, which lets different servers share certain files such as users' images, and synchronising the caching of data retrieved from the database between servers. These are, of course, only relevant when more than one server is used. Using more than one server is not necessary at all: everything needed, such as the database management system, the image files and the application server, can live on a single server, whose resources such as memory, disk space and processor time slots (vCPUs) can always be adjusted.
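As an illustration of cache synchronisation between servers, the sketch below keeps cached database results in one shared cache so that every application server reads and writes the same copy instead of each holding its own. Redis and the redis-py client are used here purely as an assumption; the host name cache.internal and the query_database() helper are likewise made up for the example.

```python
# A minimal sketch of sharing a database cache between several servers.
import json
import redis

# One cache that all application servers talk to (assumed Redis instance).
cache = redis.Redis(host="cache.internal", port=6379, db=0)

def query_database(user_id: int) -> dict:
    # Placeholder for the real (and comparatively slow) database query.
    return {"id": user_id, "name": "example user"}

def get_user(user_id: int) -> dict:
    """Fetch a user, preferring the cache shared by all servers."""
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    user = query_database(user_id)
    # Store the result for 60 seconds; every server sees the same entry,
    # so the per-server caches cannot drift apart.
    cache.set(key, json.dumps(user), ex=60)
    return user
```

Because the cached entry lives outside any individual application server, adding or removing servers does not change what a visitor sees.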

Load balancing across server hardware and software will not be used to its full potential if the Linux server that first receives incoming traffic is configured with a very restrictive outbound bandwidth limit. This is explained in more detail in the post "A word about limiting data transfer costs and visitor traffic".