Project files

Files that have no specific use yet, and that cannot be used as straightforwardly as catalog images, can still be saved to a project. Files of any kind can be drag'n'dropped onto the "Files" table, which saves them as project files. These files are included in backups when a project is exported, and they are recreated if the project is imported in the project listing view. On tablet devices, the operating system's screen splitting feature can be used, which presumably allows one to drag and drop files from the file manager application.

Next to the files are Download buttons, which allow a file to be downloaded for use if required. A fuller file preview function will become available at some point, which could e.g. be used to see what is in a zip package; for some file types (e.g. zip files, image files and text files), their contents can already be viewed in the File preview panel.

CDN files

If, in addition to storing files, you want to share them publicly with others, you can transfer files to an (existing) CDN service, which in this case is Bunny's CDN Storage. The only effort required is to create a Storage Zone in Bunny's settings and an apikey for it (entered in the user settings of the publishing application, under the external services settings). There are two types of these apikeys: one read-only and one read-write. The latter is needed for file transfer using a separately installed FTP application.

For files to be visible in the CDN Files table, they must be located in a directory whose name starts with "project_" followed by the id number of the project in question; the rest of the directory name can be anything. In the CDN Files table, listed on the same line as the files, are urls, which are the publicly accessible web addresses. You can use them however you like, but be aware that these files are distributed via the CDN service, so downloading them via the web, and hence via Bunny, incurs a charge.
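
As a rough illustration of the read-write transfer and the directory naming convention, here is a minimal Python sketch using the standard library's ftplib. The endpoint, storage zone name, apikey and project id below are placeholders; check Bunny's own documentation for the actual connection details.

```python
from ftplib import FTP_TLS

# Placeholder connection details; consult Bunny's documentation
# for the actual Storage Zone endpoint and credentials.
HOST = "storage.bunnycdn.com"   # assumed Bunny Storage FTP endpoint
ZONE = "my-storage-zone"        # storage zone name, used as the username
APIKEY = "read-write-apikey"    # the read-write apikey, used as the password

ftp = FTP_TLS(HOST)
ftp.login(user=ZONE, passwd=APIKEY)
ftp.prot_p()  # encrypt the data channel as well

# The directory must start with "project_" + the project's id number;
# the rest of the name ("_shared" here) can be anything.
directory = "project_123_shared"
try:
    ftp.mkd(directory)
except Exception:
    pass  # the directory may already exist
ftp.cwd(directory)

with open("notes.zip", "rb") as fh:
    ftp.storbinary("STOR notes.zip", fh)
ftp.quit()
```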

This same CDN file storage can also be used to play audio and video files, e.g. by providing a public address referring to them in the text editing view (under the Embeddables tab, via Attach embeddable, and from the modal window that opens, "Video file" or "Audio file"). There is no need to use a particular kind of directory name, but you should refer to Bunny's instructions on how to form public addresses.

Importing a project using SFTP (SSH File Transfer Protocol)

Since importing projects directly via the browser is limited to 500 MB, backup files larger than this must first be moved (using an SFTP client application) to a subdirectory of the "importableprojects" directory in CDN Storage; the subdirectory's name must be exactly the same as the username used in the publishing application. After this transfer, the backups appear in the "Importable projects" table, together with the backup-specific buttons "copy from CDN storage" and "import"; the former transfers the backup to the server, after which the actual import may begin. The transfer from CDN storage takes no more than a moment; the importing may take a little longer. Importing this way also causes less memory and CPU usage on the server. Depending on the restrictions in Bunny's settings, these files can also be shared publicly, but the contents of the "importableprojects" directory cannot be listed publicly.
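
Any SFTP client application will do the transfer, but as a minimal sketch, the same step could look like this in Python with the third-party paramiko library. The host, credentials, username directory and file name are placeholders.

```python
import paramiko

# Placeholder connection details for the CDN Storage's SFTP endpoint;
# check Bunny's documentation for the real host and credentials.
HOST, PORT = "storage.bunnycdn.com", 22
ZONE, APIKEY = "my-storage-zone", "read-write-apikey"
USERNAME = "myusername"  # username in the publishing application

transport = paramiko.Transport((HOST, PORT))
transport.connect(username=ZONE, password=APIKEY)
sftp = paramiko.SFTPClient.from_transport(transport)

# The subdirectory name must match the publishing application
# username exactly.
remote_dir = f"importableprojects/{USERNAME}"
try:
    sftp.mkdir(remote_dir)
except IOError:
    pass  # the directory may already exist

# Upload a backup larger than the 500 MB browser limit.
sftp.put("big_backup.zip", f"{remote_dir}/big_backup.zip")
sftp.close()
transport.close()
```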

The file preview function is occasionally enhanced with new action tools, one of the most useful of which is the ability to gather, from all pages of a PDF file, all the text that has a freeform line drawn next to it (using PDF reader software). The purpose of these lines is to indicate where something is e.g. useful. The style of the marking (annotation) is not overly restricted; it can also be a line formed by a few back-and-forth movements. It can be right next to the text or at the side of a page, as the positions of the top and bottom edges are what matter. A 100-page PDF file is processed in less than a second, after which the user is given the results, which can then be further processed elsewhere.
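
The application's own implementation is not shown here, but the idea can be approximated in Python with the PyMuPDF library: for every freeform (ink) annotation, take the vertical extent of its bounding box and clip out the page text at the same height.

```python
import fitz  # PyMuPDF

def gather_marked_text(path):
    """Collect text whose vertical span matches a freeform (ink) annotation."""
    results = []
    with fitz.open(path) as doc:
        for page in doc:
            for annot in page.annots() or []:
                if annot.type[0] != fitz.PDF_ANNOT_INK:
                    continue
                r = annot.rect
                # Only the top and bottom edges matter: clip the whole
                # page width between the annotation's y-coordinates.
                clip = fitz.Rect(0, r.y0, page.rect.width, r.y1)
                text = page.get_text("text", clip=clip).strip()
                if text:
                    results.append(text)
    return results

print("\n\n".join(gather_marked_text("annotated.pdf")))
```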

Converting writing to a document file

The use of so-called office applications can be reduced by taking advantage of the possibilities offered by this publishing application, as documents can be created in other ways, too. A component of the open source Apache POI software is used to create a Word file based on the information contained in a writing in the publishing application.

A writing has, as optional text parts, a Document header and a Document footer, where information (address, document type name and date) can be provided using the syntax shown as a placeholder hint in the relevant textareas. That information is then placed in the appropriate positions in the header and footer of the resulting document file, together with page numbering etc. This information is only used in the writing sending view. The document file is created at the click of a button and is available for further use as a downloadable file.
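
The application itself does this on the server with Apache POI (a Java library), so the following Python sketch using the python-docx library is only an analogy of the end result: header and footer texts placed around a body and saved as a .docx file. The texts and the file name are made up.

```python
from docx import Document

doc = Document()
section = doc.sections[0]

# Header and footer texts roughly corresponding to the
# Document header / Document footer parts of a writing.
section.header.paragraphs[0].text = "Example Street 1, 00100 Helsinki"
section.footer.paragraphs[0].text = "Memo  |  2026-01-15"

doc.add_heading("Title of the writing", level=1)
doc.add_paragraph("Body text of the writing goes here.")

# Page numbering would require inserting a PAGE field code, for which
# python-docx has no high-level helper; it is omitted in this sketch.
doc.save("writing.docx")
```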

Sending email based on a writing

It could be that some writings' only purpose is to be sent as email, where the email is sent from within the publishing application itself. This might be useful when initiating a conversation, or when sending something to which no reply is expected, as reply messages will not be shown in the publishing application.

Before such email can be sent, an SMTP server to use must be set in the user settings. The required details are the address of the SMTP server and the port number it uses, as well as a username and password for it. SMTP stands for Simple Mail Transfer Protocol, and it is used only for outgoing email. The information required is the same as what might be needed to configure a separate email application, though the username and password are not necessarily the same if one wants to create separate ones for this purpose. Communication toward the SMTP server is not highly secured; only using a more secure port number gives more benefit in this regard. Email is sent from the server of the publishing application instance, not straight from the browser.
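
The details asked for in the user settings are the same ones any SMTP client needs; a quick way to verify them is a few lines of Python with the standard smtplib (the host, port and credentials below are placeholders).

```python
import smtplib

# Placeholder SMTP details, as they would be entered in the user settings.
with smtplib.SMTP("smtp.example.com", 587) as server:
    server.starttls()  # upgrade the connection to TLS on a STARTTLS port
    server.login("smtp-user", "smtp-password")
    print("SMTP settings look valid.")
```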

As attachments to emails, it is possible to choose to send the images that are included in the content text of the writing or all the images attached to the writing, as well as any of the files attached to the project. For project files, the maximum size per file is 2.5 MB. For images, the largest available size per image is sent. The content text of a writing is converted to an unstyled format so that it can be sent as plain text.

The recipients list can be used to send one email to many recipients or to each individually; the email(s) are sent at the click of a button. When sending to multiple recipients, a comma is used as a separator. A transmission delay of 1 second is applied between separate emails.
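
Put together, the sending behaviour described above could look roughly like the following Python sketch; the addresses, file name and texts are placeholders, and the publishing application's own server-side implementation is not shown here.

```python
import smtplib
import time
from email.message import EmailMessage
from pathlib import Path

MAX_PROJECT_FILE = int(2.5 * 1024 * 1024)  # 2.5 MB limit per project file

# Comma-separated recipients list, sent to each recipient individually.
recipients = [r.strip() for r in "a@example.com, b@example.com".split(",")]

with smtplib.SMTP("smtp.example.com", 587) as server:
    server.starttls()
    server.login("smtp-user", "smtp-password")
    for rcpt in recipients:
        msg = EmailMessage()
        msg["From"] = "sender@example.com"
        msg["To"] = rcpt
        msg["Subject"] = "A writing"
        # The writing's content text, converted to unstyled plain text.
        msg.set_content("Plain-text version of the writing.")
        attachment = Path("project_file.pdf")
        data = attachment.read_bytes()
        if len(data) <= MAX_PROJECT_FILE:
            msg.add_attachment(data, maintype="application",
                               subtype="pdf", filename=attachment.name)
        server.send_message(msg)
        time.sleep(1)  # 1-second transmission delay between separate emails
```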

Exporting a writing together with its images

When preparing a writing and its images for publishing elsewhere, such as in a discussion forum or a Facebook group, the writing and its images can be exported to a zip package containing three different versions of the writing. One is an unstyled HTML version with the tags p, h1 and h2 (lists are converted to text paragraphs). The other two are plain text, with the difference that one of them does not have a blank line after each text paragraph. Pictureshows' image files are named in a way that makes it easier to identify which ones belong to the same pictureshow.
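
The exact file naming inside the package is not described here, but the three-version structure could be sketched in Python roughly as follows; the crude tag handling and the file names are illustrative assumptions only.

```python
import re
import zipfile

html = "<h1>Title</h1><p>First paragraph.</p><p>Second paragraph.</p>"

# Pull the text out of the p/h1/h2 blocks for the plain-text variants.
blocks = re.findall(r"<(?:p|h1|h2)[^>]*>(.*?)</(?:p|h1|h2)>", html, re.S)
with_blank_lines = "\n\n".join(blocks)   # blank line after each paragraph
without_blank_lines = "\n".join(blocks)  # the variant without blank lines

with zipfile.ZipFile("writing_export.zip", "w") as zf:
    zf.writestr("writing.html", html)  # unstyled HTML version
    zf.writestr("writing_spaced.txt", with_blank_lines)
    zf.writestr("writing_compact.txt", without_blank_lines)
```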

The customer may well be interested in testing the number of users the publishing application can handle, and such load testing is acceptable as long as it does not go on for hours. Even a minute of automated loading of some writing of a work, e.g. with a load testing application available online, is sufficient to get a good feel for it. In a single-server system with basic resources, it should be possible to load one article e.g. 8000 times per minute with a response time of 70 ms. Considering that e.g. most blog posts on the internet get only a few hundred reads per month, basic resources should be "just fine" for many purposes. Images of public works are always loaded via the CDN service, if a prepayment for the CDN service has been made.

A lazy loading method is used for images and videos, i.e. only the parts of a webpage already on the screen, and those within 1000 pixels of it, have their images etc. loaded. The resolution of the images to be loaded has also been optimised to avoid unnecessarily loading images with too high a resolution, both when the page is first loaded and when the browser window is resized. On large pages, resizing the browser window could easily result in up to thousands of image download attempts if the window were resized continuously for, say, a minute, so to take such a possibility into account, only the image layout is adjusted according to the browser window size, and the need to load different images at different resolutions is programmatically considered only momentarily.

One quick way for the customer to reduce the load on the publishing application is to increase the cache time per work, which is 5 seconds by default. This caching means that e.g. one writing of a work is not generated separately for each page load; instead, the last generated version is stored on the server in a file, which is then used for e.g. the next 5 seconds, or for as long as it is configured to be, up to two hours. Generating a writing and other parts of a work always causes a moderate load on the server's processor, and there is always the possibility of many page load requests arriving at once. Reading the generated part of the work from a file causes, by comparison, a very low load on server resources.
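
In principle the mechanism is simple, as this Python sketch of a file-based cache with a configurable time-to-live shows; the real server-side implementation is not shown here.

```python
import os
import time

CACHE_TTL = 5  # seconds by default; configurable up to two hours

def cached_render(cache_path, generate):
    """Return a generated page, regenerating it at most once per CACHE_TTL."""
    try:
        if time.time() - os.path.getmtime(cache_path) < CACHE_TTL:
            with open(cache_path, encoding="utf-8") as f:
                return f.read()  # cheap: serve the stored copy
    except FileNotFoundError:
        pass
    html = generate()            # expensive: regenerate the writing
    with open(cache_path, "w", encoding="utf-8") as f:
        f.write(html)
    return html
```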

Load balancing of server hardware and software would not be fully utilized if the Linux server that first receives the incoming traffic were configured to be highly restrictive in its outbound bandwidth. That would be done with Linux's tc command (short for "traffic control"), and the reason for its use could be e.g. a supposed threat that bots would be deliberately programmed to consume the monthly traffic allowance by downloading moderate amounts of different content, such as images and pages of works, many times over a long period. One would probably prefer not to pay several hundred euros extra instead of the expected e.g. ten euros. It might be least stressful to be prepared to pay three times more than expected by keeping the bandwidth only slightly limited, and hopefully finding out again and again that nothing bad happened.
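
As a sketch only: a token bucket filter set with tc could cap outbound traffic roughly as follows. The interface name "eth0" and the exact rate, burst and latency values are assumptions, not the publishing application's actual configuration.

```python
import subprocess

# Cap outbound traffic on eth0 to 60 Mbps with a token bucket filter;
# equivalent to running the tc command directly on the shell.
subprocess.run(
    ["tc", "qdisc", "add", "dev", "eth0", "root",
     "tbf", "rate", "60mbit", "burst", "32kbit", "latency", "400ms"],
    check=True,
)
```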

Normally, many data centre service providers charge for outbound data from the servers of the publishing application once data transfer volumes exceed e.g. 20 TB, as is the case with Hetzner. This amount, i.e. 20 000 GB, is a large amount for a low-traffic instance of the publishing application. To be on the safe side, an additional limitation is applied to the servers of the publishing application on a per-customer basis to prevent monthly data transfer volumes from exceeding that 20 TB, by limiting the transfer rate of the server's virtual network card to 60 Mbps or 7.5 MB/s (see calculation: https://www.wolframalpha.com/input/?i=60Mbps+in+TB%2Fmonth). The portion of the cost that exceeds the specified data transfer limit is referred to as 'overage costs', and the data that goes out of the server is referred to as either 'egress data' or 'outbound data'. If the data centre services are provided by a provider other than Hetzner, or if 20 TB is otherwise not the limit for the overage costs, this will be explicitly mentioned before the conclusion of the service agreement.
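
The calculation linked above is easy to reproduce: a 60 Mbps cap over a 30-day month stays just under the 20 TB limit.

```python
RATE_MBPS = 60                      # virtual network card limit
SECONDS_PER_MONTH = 30 * 24 * 3600  # 2 592 000 s in a 30-day month

bytes_per_second = RATE_MBPS * 1_000_000 / 8        # 7.5 MB/s
tb_per_month = bytes_per_second * SECONDS_PER_MONTH / 1e12
print(f"{tb_per_month:.2f} TB/month")               # 19.44 TB < 20 TB
```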

If the number of visitors suddenly increases, it could become difficult to log in with a username or do any kind of editing, for example, because the load would make everything very slow. The images in the writing "Changing the network topology of the publishing application" give a few ideas about how to change the network topology to make it all more resistant to high external load. Other quality of service improvements include simply adding more vCPUs for the virtual servers to use, or switching to dedicated servers. For a bit more information on these, see the writing "Becoming a customer of the publishing application".

The service can be partitioned in many different ways, for example onto different servers, for situations where there is a significant increase in usage or simply a desire to separate the public and private sides.

If there is anything to be understood from these descriptions, it might be that the publication service can be "chopped up" in many different ways, assigning different functionality to different servers so that, for example, uploading images to the service (and scaling them to different sizes at the same time) does not place a load on the same server that is used to display the finished works. Reliability can be increased by multiplying the number of servers performing a given functionality and letting the load balancing component decide which one is used at any given time. Database servers and files can be replicated to ensure that even a hardware failure does not interrupt the service, because all data and files would be continuously at least duplicated before the hardware failure.

Other noteworthy possibilities include a network file system, i.e. different servers sharing certain files such as users' images, and synchronisation of the caching of data retrieved from a database between servers. This is of course relevant only if more than one server is used. Using more than one server is not necessary at all, i.e. everything needed, such as the database management system, image files and the application server, can be located on a single server. It is always possible to change resources such as memory, disk space and time slots on the processor (vCPU).

Load balancing of server hardware and software will not be fully utilized if the Linux server that first receives the incoming traffic is configured to be highly restrictive in its outbound bandwidth. This is explained in more detail in the writing "A word about limiting data transfer costs and visitor traffic".

When there are many writings in different projects, remembering their content can be challenging, e.g. when one means to refer to several other writings in which a certain topic is covered throughout, in which something specific has been said, or which could give ideas for a new writing.

It may not always be purely good to make writings easier to find, as from the point of view of managing one's thoughts it might be better to e.g. keep writings written in, or related to, different time periods "away" from each other until they are really needed. Groupings might remain the same for a long time, and one's mind might consider them closely related even when one kind of knows that they maybe aren't.

However, searching for writings useful for some purpose might be tedious, and thus in some scenarios there are benefits to preselecting writings that one might want to find quickly later.

In writing findability groups, the names of groups are limited to a few dozen characters, while the groups themselves may have several descriptions, which can be much longer. These descriptions are intended to characterise in some way e.g. what is to be found in the writings marked to a group, or what thoughts might arise from them.

The search view is a natural place for them to be useful: the name of a group can be selected from a menu, which causes a list of its descriptions to appear with links whose purpose should be guessable.

A potentially useful place for the descriptions might be the "inspiration arrangements" view, where the other kind of predefined elements are adequate items. They can be moved around on screen and their positions can be saved, like images in the image assorting view. More on this in the writing about it.

Instead of textual descriptions, one could imagine using e.g. just single words and listing them alongside the full names of writings, but wouldn't this probably leave something significant undefined? And if a user has e.g. 600 writings, would it be "functional" to look through them?

A whole project, with its writings, images and other material, can be wrapped in a zip package with a single button press in the project managing view. If needed, the same zip package can be dragged onto the project list in the project list view, where a new project is generated from the contents of that zip package. Please note that this also regenerates all the image catalogs used by the project, i.e. no attempt is made to combine the new project with any existing image catalogs in the service. It is also possible to put multiple projects in a single backup without the backup containing the image catalogs used by those projects multiple times, but a user might want to consider how much server load this might cause.

Backups generated in this way can be drag'n'dropped into the project list view as usual (image catalogs are not duplicated even if multiple projects use the same ones). If one holds down the Shift key during the drag'n'drop and targets a certain project, no new project is created; instead, the solutions of the backed-up project are merged into the target project (including image catalogs). This is useful e.g. when one wants to create one big project from solutions that are spread over several other projects.

The disk space available to each customer is used not only by the images and project files stored by users, but also by the operating system, under which all other relevant software is installed. During the image upload phase, images are scaled to several different sizes, with the largest ones taking up a megabyte or more, depending on the image format. Project files are files that users can save to a project and that are also included in backups. When making a backup, disk space is also needed for creating the zip file. The operating system reserves approximately 4 percent of disk space for its own use, so that it can continue to function even when the user has used up the available space on e.g. an 80 GB partition of the SSD device. In the early stages of using the publishing application, the user would have approximately 60 GB available, which is enough to ensure that the disk space does not immediately run out just from storing images.

If necessary, customers can request an increase in disk space, for which there are two different options; both have in common that the disk space cannot later be reduced back to its previous size. In the first option, the amount of disk space is increased by choosing a more expensive option from the price list (Hetzner), which may also include more vCPUs and RAM (e.g. 160 GB of disk space cost 15.67 eur/month at the beginning of 2026).

In the second option, disk space is allocated separately so that one can start with e.g. 10 GB and increase it as needed up to 10 terabytes. At the beginning of 2026, the price of this "block storage volume" was 0.55 eur/month for the first 10 GB, 5.52 eur/month for 100 GB, and 565.45 eur/month for 10 TB.

Pricing for the block storage volume is hourly, meaning that billing ends when one stops using the disk space. However, since it may contain images and other files, when one wants to stop using it, those images and other files need to be moved to the "basic disk" using the publishing application's user interface or, if there is not enough space, deleted. The user settings of the publishing application include a mode that provides information (in a few places in different views) on which disk the images and other files are physically located, as well as functions for transferring files from one disk to another. These have been designed to be easy to use, so that the user can easily understand what they are doing.

When it was being considered how users could be given the possibility to use additional disk space, the Logical Volume Manager (LVM) was considered, as it makes two different disks appear as one to the operating system; however, it could fragment parts of files across different disks, making it difficult to remove the additional disk later. Thus, a different approach was chosen: in practice, the additional disk appears to the operating system as a separate directory, which makes it easier to use in a network topology where different components of the publishing application run on different servers and where the use of a network file system (NFS) is required.

Even if images have already been placed in writings, this does not prevent moving images from one image container to another, or a whole image container to a different image catalog, since images are referred to by an id code which does not change when they are moved. The view includes switch buttons to change the possible destinations of images and captions, so that in one position the transfer is only possible within the limits of a project and the image catalogs associated with it; but since one sometimes wants to move writings or images elsewhere, too, that has been made possible. In addition to moving writings, they can also be copied.

Selecting any writing particular has the useful effect of marking as ready for transfer those images in the opened image container that are used as writing particulars in the opened writing. Transferring images to another image catalog in this view requires the use of a temporary image container, unless an existing one can be used to move the images. Alternatively, images can also be transferred in the image assorting view.

A user's side projects can be identified by the "(side project)" notation added to their names. If one wants to move something to a side project, the destination catalog will not be visible unless one is currently in the moving things view of that project. The same applies to writing collections. The writing collections that are the sources of a transfer are likewise only visible when in the moving things view of the side project.

Other means of copying project elements are the "overwrite writing" and copy/paste functions that are available in the text editing view. The "image assorting" view (described in the writing "Freeform image assorting") could also prove to be beneficial.

After one has written enough and/or has not written for a long time, the search functionality can be useful for finding one's writings. The occurrence of a search term is highlighted in both the headings and the body text of search results, and some text from the related writing is shown before and after it. The writings that appear in the search results are linked primarily to the text editing view. If a writing has been published, a link to it will also be available. These links can also be useful when making redirectlink writings.

A limitation of searching the text of a writing is that the search currently targets the HTML version of the text, meaning that styling applied to the text might cause something not to be found even though it appears to be in the text.

In this view, you can initialise a search by adding the parameter "seachquery" to the url and giving it a search criterion. If the search criterion begins with "author:", the text following it is assumed to refer to an authorid. These are created in the Authors section of the user preferences, which is an experimental feature. Writing finding groups are first created in a separate view and can then be selected to list preselected writings in a project-independent way (requires the experimental features mode to be turned on).

Searchability of the names of image containers may become useful once there are already several dozen image containers. It is also possible to search for the names of adequates, and separately for the adequate items. Writings can also be searched using a writing collection id (e.g. writingcollectionid:2555). For writing finding group descriptions there are three additional functions, two of which allow listing the related writings in either the slicedtextediting view or the lotsoftextediting view. The third generates a printable webpage combining all of a group's writings, with image data embedded within the webpage.

You may want to perform some operations on several writings at a time, such as hiding parts of a writing that are not needed (e.g. the ingress), replacing odd-looking quotes with more common ones, or styling multiple writings to look similar. Doing these things one by one would be quite a clunky job. Other benefits include the ability to see the dates of all the writings in a work at once and, if needed, change them. From the image you might guess that changes can only be made to one collection of writings at a time, but it should be mentioned here that holding down the Ctrl key while selecting a collection of writings from the menu does not replace the previous listing but adds to it. The header fields in the table can be used for sorting in a predictable way.