The customer may well be interested in testing how many users the publishing application can handle, and such load testing is acceptable as long as it does not go on for hours. Even a minute of automatically loading one writing of a work, e.g. with a load testing application available online, is sufficient to get a good feel for it. In a single-server system with basic resources, it should be possible to load one article e.g. 8000 times per minute with a response time of 70 ms. Considering that most blog posts on the internet get only a few hundred reads per month, the basic resources should be "just fine" for many purposes. Images of public works are always loaded via the CDN service, provided that a prepayment for the CDN service has been made.
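For illustration, below is a minimal load-test sketch in TypeScript; the URL, duration and concurrency are hypothetical placeholders rather than values tied to the publishing application, and it assumes Node.js 18 or newer, where fetch is built in.

    // Minimal load-test sketch; all values here are illustrative only.
    const url = "https://example.com/works/some-writing"; // hypothetical URL

    async function loadTest(durationMs: number, concurrency: number) {
      const end = Date.now() + durationMs;
      let requests = 0;
      let totalMs = 0;

      // Each worker loads the same page in a loop until time runs out.
      async function worker() {
        while (Date.now() < end) {
          const start = Date.now();
          await fetch(url);
          totalMs += Date.now() - start;
          requests += 1;
        }
      }

      await Promise.all(Array.from({ length: concurrency }, worker));
      console.log(`${requests} requests, average ${(totalMs / requests).toFixed(1)} ms`);
    }

    loadTest(60_000, 20); // one minute of load with 20 concurrent workers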
A lazy loading method is used for images and videos, i.e. images etc. are loaded only for the parts of a webpage that are already on the screen or within 1000 pixels of it. The resolution of the loaded images has also been optimised, so that images with an unnecessarily high resolution are not downloaded, both when the page is first loaded and when the browser window is resized. On large pages, resizing the browser window continuously for, say, a minute could easily result in up to thousands of image download attempts, so to take such a possibility into account only the image layout is adjusted according to the browser window size while resizing is in progress, and the need to load images of a different resolution is programmatically considered only momentarily.
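A minimal sketch of this kind of lazy loading in browser TypeScript is shown below; the data-src attribute and the 200 ms settling delay are assumptions for illustration, not the publishing application's actual implementation.

    // Load an image only once it is on screen or within 1000 pixels of it.
    const observer = new IntersectionObserver(
      (entries, obs) => {
        for (const entry of entries) {
          if (!entry.isIntersecting) continue;
          const img = entry.target as HTMLImageElement;
          img.src = img.dataset.src ?? ""; // start the actual download
          obs.unobserve(img);              // each image is loaded only once
        }
      },
      { rootMargin: "1000px" } // extend the viewport by 1000 px in every direction
    );
    document.querySelectorAll<HTMLImageElement>("img[data-src]")
      .forEach((img) => observer.observe(img));

    // Reconsider image resolutions only after resizing has settled, so that
    // continuous resizing adjusts the layout but does not trigger thousands
    // of download attempts.
    let resizeTimer: number | undefined;
    function reconsiderResolutions() {
      // Hypothetical step: pick the image variant suited to the new width.
    }
    window.addEventListener("resize", () => {
      window.clearTimeout(resizeTimer);
      resizeTimer = window.setTimeout(reconsiderResolutions, 200);
    });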
One quick way for the customer to reduce the load on the publishing application is to increase the per-work cache time, which is 5 seconds by default and can be set to up to two hours. This caching means that e.g. one writing of a work is not generated separately for each page load; instead, the most recently generated version is stored on the server in a file, which is then served for the next 5 seconds or for however long the cache time is configured to be. Generating a writing or other part of a work always causes a moderate load on the server's processor, and there is always the possibility of many page load requests arriving at once. Reading an already generated part of a work from a file causes, by comparison, a very low load on server resources.
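As a rough sketch of the idea (the cache location and the generate() step are hypothetical, not the publishing application's actual code), the server-side logic could look like this in TypeScript on Node.js:

    import { existsSync, mkdirSync, readFileSync, statSync, writeFileSync } from "node:fs";

    const CACHE_DIR = "/tmp/work-cache"; // hypothetical location
    const CACHE_TTL_MS = 5_000;          // 5 s by default, configurable up to two hours
    mkdirSync(CACHE_DIR, { recursive: true });

    function renderWork(workId: string, generate: () => string): string {
      const cacheFile = `${CACHE_DIR}/${workId}.html`;
      if (existsSync(cacheFile)) {
        const ageMs = Date.now() - statSync(cacheFile).mtimeMs;
        if (ageMs < CACHE_TTL_MS) {
          return readFileSync(cacheFile, "utf8"); // cheap: read the stored file
        }
      }
      const html = generate();    // expensive: regenerate the part of the work
      writeFileSync(cacheFile, html);
      return html;
    }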
Load balancing of server hardware and software would not be fully utilised if the Linux server that first receives the incoming traffic were configured to be highly restrictive about outbound bandwidth. That restriction would be done with Linux's tc command (short for "traffic control"), and a reason for using it could be e.g. the supposed threat of bots deliberately programmed to consume the monthly traffic allowance by downloading moderate amounts of different content, such as images and pages of works, many times over a long period. One would probably prefer not to pay several hundred euros extra instead of the expected e.g. ten euros. It might be least stressful to be prepared to pay three times more than expected by keeping the bandwidth only slightly limited and, hopefully, finding out again and again that nothing bad happened.
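As a sketch, such a cap could be applied with tc roughly as follows; the interface name eth0 and the burst and latency parameters are assumptions, and the rate matches the 60 Mbps limit discussed below.

    tc qdisc add dev eth0 root tbf rate 60mbit burst 32kbit latency 400ms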
Normally, many data centre service providers charge for outbound data from the servers of the publishing application once data transfer volumes exceed e.g. 20 TB per month, as in the case of Hetzner. This amount, i.e. 20 000 GB, is a lot for a low-traffic instance of the publishing application. To be on the safe side, an additional limitation is applied to the servers of the publishing application on a per-customer basis: the transfer rate of the server's virtual network card is limited to 60 Mbps, i.e. 7.5 MB/s, which keeps the monthly data transfer volume below that 20 TB (see calculation: https://www.wolframalpha.com/input/?i=60Mbps+in+TB%2Fmonth). The portion of the cost that exceeds the specified data transfer limit is referred to as 'overage costs', and the data that goes out of the server is referred to as either 'egress data' or 'outbound data'. If the data centre services are provided by a provider other than Hetzner, or if 20 TB is otherwise not the threshold for overage costs, this will be explicitly mentioned before the conclusion of the service agreement.
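The same calculation as in the linked WolframAlpha query can be written out as a small TypeScript check, showing that a 60 Mbps cap amounts to roughly 19.4 TB over a 30-day month, just under the 20 TB threshold:

    const mbps = 60;
    const bytesPerSecond = (mbps * 1_000_000) / 8;      // 7.5 MB/s
    const bytesPerMonth = bytesPerSecond * 86_400 * 30; // seconds per day times days
    console.log((bytesPerMonth / 1e12).toFixed(2), "TB/month"); // prints about 19.44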
If the number of visitors suddenly increases, it could become difficult to log in with a username or do any kind of editing, for example, because the load would make everything very slow. The images in the writing "Changing the network topology of the publishing application" give a few ideas about how to change the network topology to make it all more resistant to high external load. Other quality of service improvements include simply adding more vCPUs for the virtual servers to use, or switching to dedicated servers. For a bit more information on these, see the writing "Becoming a customer of the publishing application".