After much browsing of online dictionaries, it was decided to name the images, videos, tweets, image collages, etc. that can be attached to writings "particulars". Images are the only particular type that can be stored in image catalogs for use across a number of different writings. In terms of workflow, placing particulars into a writing requires both attaching them to the writing (text editing view) and including them (writing fine-tuning view). Not all attached particulars need to be used; only those that are actually included appear. Particulars may be referred to by placing a ref marker.
It is a deliberate choice that particulars in the text being edited appear as placeholder elements that symbolize them; in the case of images, these show the image id number and the given image width. The position of a placeholder element can be changed either by clicking it to select it and then moving it with the arrow keys while holding down the Ctrl key (one text paragraph at a time) or, perhaps more easily, by placing the cursor where you want the particular to go and then Ctrl-clicking on the placeholder element.
A quick preview of any particular type is available by hovering the mouse pointer over its placeholder element. Alternatively, placeholder elements can be made to appear more visual (a function in the text editor's Misc menu), so that e.g. images look like images in the editable text (but without any fine-tuning applied).
Selectable images of an image catalog (one of its individual containers) are displayed in a modal window of the text editing view, although they could of course be displayed in some other way. If an image has Source information, it is also shown in this modal window, because sometimes the clearest difference between many similar-looking images lies in it.
When adding either separate images or pictureshow images, the modal window has a button or buttons for making use of the image selections made in the "image assorting" view. The "Add from clipboard" button attaches each image referred to on the clipboard individually, or adds them to the "idea" of a pictureshow.
The order of the images attached to a writing can be changed in the text editing view, but this is only for convenience at writing time, i.e. it does not affect the order of the images in the text. The captions of the images are displayed alongside the images attached to the writing.
When a highres version of an image is stored and available, a visual indicator is displayed in various contexts. In some contexts this indicator is revealed by the user, in others it is constantly visible. In the text editing view, each image in a writing has a few "action tools", of which the "To pictureshow" button includes that image in the preparation of a pictureshow; the effect can be seen when the modal window used for attaching a pictureshow (a bunch of images) to the writing is opened.
Images are one kind of particular that can use text styling in their captions. It is possible to have a list of attributes (rendered in bold) and their values stacked one above another by putting them on separate lines and placing two colons between an attribute and its value. These can be used e.g. to define or characterize different types of particulars. This also works with images in a pictureshow. Caption texts, if they do not use the attribute-value pairs mentioned, can have the text string "_originated_" replaced with the image particular's "Source" value, which is then shown in small caps using a font whose letter spacing is slightly looser than usual. The same styling can be applied to other parts of the caption text by inserting an "_" character where the styling should begin and another where it should end.
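A minimal sketch of how such caption styling could be rendered. The function name, the produced markup and the exact style values are assumptions; only the attribute::value and _..._ conventions come from the description above.

function renderCaption(caption: string, source?: string): string {
  return caption
    .split("\n")
    .map((line) => {
      const pair = line.split("::");
      if (pair.length === 2) {
        // attribute::value -> bold attribute followed by its value
        return `<strong>${pair[0].trim()}</strong> ${pair[1].trim()}<br>`;
      }
      // the _originated_ token is replaced with the Source value first
      const withSource = source
        ? line.replaceAll("_originated_", `_${source}_`)
        : line;
      // any remaining _..._ span gets loose-spaced small caps
      return withSource.replace(
        /_([^_]+)_/g,
        '<span style="font-variant: small-caps; letter-spacing: 0.05em">$1</span>'
      );
    })
    .join("\n");
}

// Example: an attribute list, and a caption using _originated_
console.log(renderCaption("Camera::Canon EOS\nYear::2003"));
console.log(renderCaption("Photo by _originated_", "City archives"));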
Some other specific types that can be attached are videos (YouTube, Vimeo, TikTok, streamable video files, maybe others), tweets, SoundCloud music, streamable audio files, coding examples (at least CodePen and JSFiddle), podcasts (about thirty different hosting services), etc.
When using a direct link to a video or audio file, the player's style is currently based on what the browser offers by default. It is advisable to place the files to be streamed so that they are available through the CDN service that is already in use (files can be transferred using SFTP).
A bunch of images is classified as a third, completely different type of particular, which can include several images from one or more image catalogs; these can then be used to make an image slideshow or any other type of presentation available in the writing fine-tuning view. Each of these images can have its own caption in addition to the optional caption for the image collage. The order of the images can be varied when forming the image collage. In the modal window, where the images meant for a pictureshow are shown and can be ordered, there is also a slider with choices 1-5, which selects how many images to show per image row.
In the text editing view, the attached image collages are displayed next to the text as a set of images. An attached image collage can be used as the basis for a new image collage. The captions for these separate images are displayed in the modal window when they are about to be added or just after pressing the replicate button. Otherwise their existence is only indicated (in the text editing view) by a textual reminder mentioning the number of captions, if there is at least one.
Using maps in a writing is a way to clarify location. An external API is used for geocoding, i.e. to convert a given location such as a city name into map coordinates. There are a few different map services to choose from, but their use may be subject to a fee above a certain level of use, and the number of accesses per time period may be limited.
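As a hedged sketch, geocoding a place name could look like the following server-side (Node 18+) call; the OpenStreetMap Nominatim endpoint is used purely as an illustration, the service actually chosen by the application may differ.

interface GeocodeHit {
  lat: string;
  lon: string;
  display_name: string;
}

// Illustrative forward geocoding: place name -> coordinates.
async function geocode(place: string): Promise<GeocodeHit | undefined> {
  const url =
    "https://nominatim.openstreetmap.org/search?format=json&q=" +
    encodeURIComponent(place);
  const res = await fetch(url, {
    headers: { "User-Agent": "example-app/1.0" }, // Nominatim asks for one
  });
  const hits: GeocodeHit[] = await res.json();
  return hits[0]; // best match first
}

geocode("Helsinki").then((hit) =>
  console.log(hit?.display_name, hit?.lat, hit?.lon)
);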
Creating a VTT file from an audio recording is a form of transcription. There are several different online applications for this purpose, but not all of them recognise the Finnish language. They vary widely in pricing, some offer a certain amount of free transcription time per month, and they are very likely to produce differing quality.
One option is Google's Speech-to-Text API, which can be accessed directly from the Google Cloud console using a graphical user interface. Basically, an audio file is given as input, a few choices affecting quality are made, and after a short wait a download option becomes available for retrieving an SRT file containing the transcribed audio. SRT files are almost identical to WebVTT files, both being human- and machine-readable text files. However, an SRT file needs to be converted to a VTT file before it can be used. This conversion requires a separate application, which can be a Windows application or, alternatively, any of the many conversion services available on the web (try searching for "convert srt to vtt").
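Since the two formats differ only slightly, the conversion can also be done with a few lines of code. A minimal sketch, handling the usual differences (the WEBVTT header and dot-based millisecond separators):

import { readFileSync, writeFileSync } from "node:fs";

// Minimal SRT -> WebVTT conversion: add the "WEBVTT" header and turn
// comma-based timestamps (00:01:02,345) into dot-based ones.
// Numeric cue identifiers are valid in VTT too, so they can stay.
function srtToVtt(srt: string): string {
  const body = srt
    .replace(/\r/g, "") // normalize Windows line endings
    .replace(/(\d{2}:\d{2}:\d{2}),(\d{3})/g, "$1.$2");
  return "WEBVTT\n\n" + body.trim() + "\n";
}

writeFileSync("audio.vtt", srtToVtt(readFileSync("audio.srt", "utf8")));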
The publishing application needs no input other than the URL of the audio file and the URL of the VTT file. These are used to generate an audio player that displays an interactive transcription below it. Here interactivity means that clicking a part of the transcript text changes the position from which the audio continues to play. As the listener progresses through the audio, the corresponding part of the transcript text indicates which part of the audio is currently playing.
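A minimal sketch of how such a player could be wired together from an audio URL and a VTT URL; the URLs, element structure and class name are assumptions, not the application's actual markup:

// Hypothetical wiring: clicking a cue's text seeks the audio,
// and the currently playing cue is highlighted.
const audio = document.createElement("audio");
audio.src = "https://cdn.example.com/talk.mp3"; // assumed URL
audio.controls = true;

const track = document.createElement("track");
track.kind = "subtitles";
track.src = "https://cdn.example.com/talk.vtt"; // assumed URL
track.default = true;
audio.appendChild(track);
document.body.appendChild(audio);
track.track.mode = "hidden"; // load cues and fire enter/exit events

const transcript = document.createElement("div");
document.body.appendChild(transcript);

track.addEventListener("load", () => {
  const cues = Array.from(track.track.cues ?? []) as VTTCue[];
  for (const cue of cues) {
    const span = document.createElement("span");
    span.textContent = cue.text + " ";
    // clicking a transcript fragment seeks to the start of its cue
    span.onclick = () => { audio.currentTime = cue.startTime; audio.play(); };
    transcript.appendChild(span);
    // highlight the fragment while its cue is active during playback
    cue.onenter = () => span.classList.add("active");
    cue.onexit = () => span.classList.remove("active");
  }
});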
The VTT file must be located in a place that allows distribution of such files either to everywhere or to the server used by the publishing application. It is recommended that both the audio file and the VTT file be placed on the CDN Storage provided by the CDN service that is already required, as the relevant Cross-Origin Resource Sharing (CORS) settings are easily found in its configuration.
Enabling the transcription feature requires turning on the experimental functions in the user-specific settings. Created audio-like particulars remain intact even after the experimental functions are turned off, meaning that they will also e.g. get included in backups. In this case, turning the experimental functions on or off practically just shows or hides some interface elements.
The particular browsing view is only for those particulars that are stored as images in image catalogs. The selected project limits which image catalogs are selectable in this view. An image catalog already assigned to a project can be safely removed from the assigned ones, as this does not affect e.g. images already attached to or included in writings: their visibility and functionality are based on the ids they have.
The width of the browser window automatically affects the grid size in which images are shown, up to a maximum of five images per row. Below the images are internal links to where the images have been used (writings and pictureshows), and near them are the read-only descriptions and sources (hideable if too obtrusive), which can be separately emptied by Ctrl-clicking. A URL given as source information is automatically converted to a link when shown in an image grid (in the particular browsing and combined text editing views). If a link is dropped on an image in the grid (e.g. from the operating system's side, from the browser's bookmarks or from the browser address bar), the title and URL of the link are used as the description and source information for that particular image. Also, as more info is fetched about the link, its availability date is added to the description information using a certain syntax. That date is utilized when an image container is momentarily sorted by date. This probably sounds overly complex, but sometimes there are reasons for such things.
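A sketch of how the link drop could be handled; the DataTransfer flavors used are standard browser behavior, but the field names and the update function are assumptions:

// Hypothetical drop handler: a dropped link fills in the image's
// description and source. Browsers expose dropped links as
// "text/uri-list" and often as a "text/html" fragment carrying the title.
function onImageDrop(event: DragEvent, imageId: number): void {
  event.preventDefault();
  const url = event.dataTransfer?.getData("text/uri-list") ?? "";
  if (!url) return;
  // try to recover the link title from the HTML flavor of the drop
  const html = event.dataTransfer?.getData("text/html") ?? "";
  const title = /<a [^>]*>([^<]+)<\/a>/.exec(html)?.[1] ?? url;
  // assumed application function that persists the change
  updateParticularImage(imageId, { description: title, source: url });
}

declare function updateParticularImage(
  id: number,
  fields: { description: string; source: string }
): void;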
Hovering the mouse pointer over an image reveals tool buttons such as Details, Crop, Remove, Reduce images, Usable, Load Image and "Large Preview". Opening a large preview image also gives access to the OCR function, an external service that tries its best to recognize text in images and provide the letters, words and sentences it finds as plain text. After using it, it is also possible to make a selection from the recognized text and run a web search retrieving the first three search results using either Google Search or Brave Search (their APIs). Those results are shown in the same modal window. Ctrl-clicking on the recognized text copies all of it to the clipboard. When using Google Search, the metadata in the search results can be used to save the date, title, web address, etc. in the description and source data of the image.
The Reduce images function means that all image sizes larger than 640 pixels that were prescaled during the image upload phase are removed. If the Ctrl key is held down while using it, the 1024 pixel wide image size is also left undeleted. Crop is also available in the upload phase, but after upload it is also possible to create multiple new particular images from a selected area of the selected image, which is useful e.g. when several areas of an image might work better as separate images.
To make it easier to browse through the images of the selected container, the "Large preview" image can be changed using the arrow buttons. This also applies to the "Crop images" modal window. The names of the image containers of a catalog can be copied to the clipboard, if that feels useful. Images can be marked as "usable" in two alternative styles, both of which have a "used" version (struck through). These markings can also be applied in the image assorting and combined text editing views. They are visible in most views where particular images are listed. In the particular browsing view they can be used to filter which images should be shown, using the relevant switch. Another switch is used for showing those images which aren't used anywhere.
Action tool "Usable" has two different purposes, one being revealing the modal window for showing the image with all of its different sizes when Ctrl-clicking. Later there will also be a possibility to enhance image (implemented already, but effects haven't been choosen yet) by selecting an image effect. Enhanced version is firstly created as a new image particular, but it can also be used to completely replace the image of the particular that is currently selected. Original and enhanced version can be compared neatly before choosing to do the replacing and which would also remove the enhanced image particular so that it wouldn't be needed to remove it separately. It is defined as an experimental feature, but only by the lack of different image effects.
One can forget which image catalog and which image catalog container an image is in, and one might not want to bother searching for it through many navigational moves. It is easier to click on a link that takes you to the particular browsing view with the relevant image catalog and its container already opened. As an example, such a link appears below the image displayed in the writingparticular panel of the text editing view. The images in the special pages, adequates and writing fine-tuning views also have similar links.
If the description of an image particular has a date as its first characters, using a certain syntax and on its own text line, such dates will be utilized to momentarily sort the images of a container by date.
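The exact date syntax is not specified here, so the sketch below assumes an ISO-like YYYY-MM-DD prefix purely for illustration:

interface ParticularImage { id: number; description: string; }

// Hypothetical date-based sort: if the first line of a description
// starts with a date (assumed YYYY-MM-DD), use it as the sort key.
function sortByDescriptionDate(images: ParticularImage[]): ParticularImage[] {
  const dateOf = (img: ParticularImage): number => {
    const firstLine = img.description.split("\n")[0];
    const m = /^(\d{4}-\d{2}-\d{2})/.exec(firstLine);
    return m ? Date.parse(m[1]) : Number.POSITIVE_INFINITY; // dateless last
  };
  return [...images].sort((a, b) => dateOf(a) - dateOf(b));
}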
Vertically tall images may be more convenient to scroll through if they are placed side by side so that the mouse wheel can be used to scroll through them horizontally. There is a possibility to momentarily set this up in the particular browsing view.
If an image container has images with greatly varying heights, some of which may be very tall, one might want to browse them while momentarily limiting their maximum height to 1400 pixels. Images that are vertically taller than that will look like they have been "torn across" at the bottom. This function is available in all views where images are placed on a grid. It also causes only the part of each image that will be displayed to be downloaded from the server.
Method named "lazy loading" is used often, which means that only those images (and e.g. videos) that are meant to be displayed on some moment are loaded from the server or CDN, i.e. scrolling up or down the page causes loading more to be displayed. This applies e.g to browsing image containers, but also to all images in public solutions etc.
In this alternative particular browsing view, image-type particulars are first presented in the usual way, laid out on a grid, after which one can start moving them around in the view without any special movement constraints. A large tablet or other touchscreen device with a stylus pen becomes a practical combination, as the range of hand movement can be much smaller than with a mouse.
On a touchscreen device there could be a feature allowing the user to grab an image with two fingers and rotate it while perhaps also resizing it. However, that might be a bit too gimmicky without much benefit.
This view also allows moving images from one image container to another by holding down the Alt key while clicking on an image, having first pre-selected the image container to be used as the move target. Targetable image containers can be removed from the list of targetable containers by Alt-clicking them. When using mobile devices, a separate physical keyboard can be used, or one can choose to show a switch button corresponding to the Alt key (a setting available in the user-specific settings, applying to certain views).
If no target image container is selected, Alt-clicking an image opens the Large Preview modal window, which also includes the OCR functionality and the option to change to the next or previous image with the arrow keys. As in the particular browsing view, the Large Preview image can also be displayed without fitting it into the browser window's viewport, i.e. as large as possible. The only difference is the required key combination: one has to press the Shift key along with the Alt key when clicking on an image.
When using a keyboard, Shift-clicking an image increases its visual size, and clicking with Shift and Ctrl together decreases it. This does not affect the actual resolution of the image. Four different image sizes are available.
Several images can be moved at a time by first clicking on them while holding down Alt and Ctrl simultaneously. Selections made this way are not removed when moving unselected images or when returning from a modal window, but only when the background or the selected images are clicked again. Selected images can be deleted entirely from the image container by pressing the Delete key. Their essential information can also be copied to the clipboard for use in some other views; the selection order then also determines the order on the clipboard.
From the image selections one can create new particular images or pictureshows attached to a writing. This is done by first using "copy image selections" and then going to the text editing view, where, in either of the two modal windows for adding images, the image selections can be applied by pressing the "Clipboard replace" button. In the case of inserting individual images, each separate image selection causes the corresponding image to be attached to the writing. In the case of pictureshow insertion, the user has a chance to set the order of the images before attaching the pictureshow to the writing.
After moving images around on the screen, their size and placement can be saved and reloaded later. When saving, the width of the browser window at the time is also saved, so that the arrangement can be adjusted in a browser window of a different width, making sure nothing gets placed outside the boundaries of the browser window. These layouts are reset on a per-image basis if images are moved from one image container to another.
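A sketch of one way to adapt a saved layout to a different window width; the proportional scaling shown here is an assumption about how such an adjustment could work, not the application's actual algorithm:

interface SavedPlacement { id: number; x: number; y: number; width: number; }
interface SavedLayout { windowWidth: number; placements: SavedPlacement[]; }

// Hypothetical layout adjustment: positions saved at one browser width
// are scaled proportionally so nothing lands outside the new width.
function adaptLayout(layout: SavedLayout, newWidth: number): SavedPlacement[] {
  const scale = newWidth / layout.windowWidth;
  return layout.placements.map((p) => {
    const width = p.width * scale;
    return {
      ...p,
      width,
      // clamp so the image's right edge stays inside the window
      x: Math.min(p.x * scale, Math.max(0, newWidth - width)),
      y: p.y * scale,
    };
  });
}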
Usable markings can be made as in the particular browsing view. For the sake of clarity, the indication that images are in use somewhere is presented differently in this view: the relevant links are shown as letter symbols placed over the images instead of e.g. writing names. Images can have markings like "highres", "usable" and the in-use ones at the same time. The description information of images can be set to be shown on one line within the limits of the visible widths of the images, or on multiple lines, also limited by the visible widths of the images.
The infinite type of viewport in this free-form image browsing is one where images can be spread out beyond the visible area and where the viewport can be moved by grabbing the background of the view. It allows opening images from multiple image containers. It could be developed into a viewport where one can freely browse images and also edit text, but for now it is an experimental feature that can be enabled in the user preferences. One problem with it is that gestures-type browser plug-ins may lose their ability to draw a visual line in the right place, since their functionality often relies on editing the displayed web page by adding code to it. In addition, some browsers on many devices do not quite allow the viewport to be infinite.
This screen contains the drag and drop area for uploading image files to the server. Images can be imported one by one or several at a time (serially uploaded). Entire projects with all their images etc. would be imported in the project listing view. In the user-specific settings there is a setting that causes all acceptable image formats to be saved in the non-destructive PNG format or in JPEG format. The user settings also make it possible to select whether the uploaded images should be scaled mostly using bicubic or mostly bilinear interpolation (the Lanczos or Mitchell algorithms may also get used). Individual images can be replaced in the particular browsing view.
In the "Writing fine-tuning" view, you can place a particular to the writing by pressing a "include" button of one of the particulars in the "writing particulars" list. A reference to a particular can be placed with a "ref" button. Numbers for captions are generated automatically (the presence of a particular type in caption is optional). The placement of an included particular can be changed by selecting a location in the text where it should end up and then Ctrl-clicking on the placeholder of the particular that you want to move. The fine-tuning adjustments for a particular can be accessed by selecting a placeholder of a particular (either by selecting it directly or by moving cursor on it with the arrow keys). From the particulars attached to a writing one can also select one "main image", which has its own separate adjustments. When images are deleted from the writingparticulars that are already in the text, but without removing placeholders at the same time, one doesn't need to delete these placeholders separately. Instead they can be cleaned away be just reloading the writing.
PNG images can use an alpha channel, which means that the image can be made fully or partially transparent using e.g. graphics editor software. In the writing's adjustments there is a setting that makes the writing's text follow the edges defined by that alpha channel.
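In browsers, this kind of text flow can be achieved with the standard CSS shape-outside property; a minimal sketch below (the function and its use of the image's own URL as the shape source are assumptions):

// Text wrap along an image's alpha channel via CSS shape-outside.
// Assumes a PNG with transparent areas, floated inside the text.
function wrapTextAroundAlpha(img: HTMLImageElement): void {
  img.style.setProperty("float", "left");
  // the wrap shape is derived from the image's own alpha channel
  img.style.setProperty("shape-outside", `url("${img.src}")`);
  // alpha value at which a pixel counts as "inside" the shape
  img.style.setProperty("shape-image-threshold", "0.5");
  // breathing room between the image edge and the wrapped text
  img.style.setProperty("shape-margin", "1em");
}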
A writing's content text may have references to images that do not appear in the public solution. Such references can be created by first selecting part of the writing's text in the text editing view and then clicking the "text connect" button that the attached particular images have. That converts the selected text into a link which, when clicked, opens a particular browsing view in a new browser tab with the relevant catalog, image container and the image itself (shown in a modal window) opened. Attached image particulars in the text editing view already have links to the same view; the only difference is that only the relevant catalog and image container are opened. The marking can be removed like any other text styling.
Included particulars of a writing being edited are by default presented in the form of a textual symbol, where memorability is aided by a preview version that can be shown by hovering the mouse pointer over it. An alternative would be to implement a text editor where everything is always presented as it appears publicly, but embeddable particulars loaded from elsewhere often do not load immediately. That would slow things down a bit and would pointlessly direct attention to how something is still taking shape, while thoughts should be focused on the ideas related to the writing.
That is why a so-called "better placeholders" mode is selectable from the editor's Misc menu, available in the text editing and writing fine-tuning views. The location of any placeholder element can be changed as usual in both editor modes, i.e. first move the cursor to some position in the writing (preferably between text paragraphs) and then Ctrl-click on a placeholder.
At an earlier stage, the ABBYY Cloud OCR SDK was going to be used for text recognition of image scans, and it worked so reliably for all images containing mainly text that it was already in production use, but at some point it was simply removed from ABBYY's offering, although it remains in use by previous customers.
After trying out many alternatives, two different ones have been chosen: the first is Google's Vision API and, if it is not available (e.g. no API key), OCRSpace. Google is free up to 1000 uses per month. OCRSpace is free continuously, but there is a one megabyte limit for image files unless you buy a monthly subscription. Use of the Google Vision API requires a credit card, but there is no charge if usage does not exceed a certain level. A Google API key can be restricted, e.g. so that it is available only for use with the Vision API and only within the limits of a specific website.
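A sketch of the fallback logic; both endpoints are the services' public REST entry points, but the surrounding function, environment variable names and error handling are illustrative only:

// Illustrative OCR call: prefer Google Vision, fall back to OCRSpace
// when no Google API key is configured. Keys come from the environment.
async function recognizeText(imageBase64: string): Promise<string> {
  const googleKey = process.env.GOOGLE_VISION_KEY;
  if (googleKey) {
    const res = await fetch(
      `https://vision.googleapis.com/v1/images:annotate?key=${googleKey}`,
      {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          requests: [
            {
              image: { content: imageBase64 },
              features: [{ type: "TEXT_DETECTION" }],
            },
          ],
        }),
      }
    );
    const data = await res.json();
    return data.responses?.[0]?.fullTextAnnotation?.text ?? "";
  }
  // Fallback: OCRSpace. Note its 1 MB file size limit on the free tier.
  const form = new FormData();
  form.append("base64Image", `data:image/png;base64,${imageBase64}`);
  const res = await fetch("https://api.ocr.space/parse/image", {
    method: "POST",
    headers: { apikey: process.env.OCRSPACE_KEY ?? "helloworld" }, // public test key
    body: form,
  });
  const data = await res.json();
  return data.ParsedResults?.[0]?.ParsedText ?? "";
}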
Google's Vision API can be used e.g. to quickly and easily read old receipts. Texts from books, magazines and websites are also very likely to be recognised in a very usable way.
Other alternatives that have been tried include Amazon's and Microsoft's offerings, as well as api4ai, but pricing and deployment issues made Google's Vision API the preferred option. Microsoft's service might have been a good choice, but when used via Eden AI it could often fail to see anything where "everyone else" recognized the text just fine. On the other hand, one could easily make the ABBYY OCR SDK produce extra letter artifacts (e.g. "anomalies apparent during visual representation"). After trying to use Microsoft's Azure AI Vision directly, interest was terminated by the notion that it began requiring organizational-level approval for username access, or something like that.
OCR functionality is enabled in the "particular browsing" and "image assorting" views when an image is being viewed in the "Large preview" modal window. The detected text can easily be placed on the clipboard by clicking it, or one can press the Ctrl key while selecting part of the resulting text to do something with it, such as just copying it or using it to search for information on the web. In the latter case, the resulting text is automatically placed at the end of the item's textual content. Recognizing text from an image is usually a very fast operation with the Google Vision API, taking about a second. The OCR buttons are only visible when one of the relevant API keys is set.
ABBYY's text recognition seems useful, as it just doesn't leave much to complain about when used on images containing scanned texts or screenshots of webpages. As parameters, it is possible to define the languages in which texts are to be found.
[Sample OCR output from two scanned book pages (pp. 444-445, "The Mechanism of Time-Binding" and "Higher Order Abstractions"): the running body text is recognized almost verbatim, while the diagram area produces scattered letter artifacts of the kind mentioned above.]
It is not pleasant when photographs of possibly nostalgic places that may have been very significant to someone are posted in a city-specific online discussion group; one might even be surprised at how little time people have to relate their feelings about the present and their memories of the past to such intrusive photographs, whose content and the feelings they evoke may not match the observer's personality at all. Hopefully, when the emphasis is not on location or temporality, the response is less likely to be so reluctant. Thus, these photographs, taken a couple of decades ago, could well be used to demonstrate text recognition from photographs.
However, it turns out that the usefulness of text recognition from photographs leads to the feeling that training of some artificial intelligence component might be required. The Cloudinary OCR add-on used here actually uses the Google Vision API and, according to its documentation, cannot be given any parameters to guide text recognition when the texts consist only of Latin alphabets, i.e. the analysis results are the best available. The original images used in the analysis are 2015 x 1512 pixels. Google's Vision API also returns information about where in the image each text is found, which Cloudinary uses to automatically highlight those parts of the images where the analysis indicates text is present.
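For illustration, Cloudinary's OCR add-on is requested via an upload parameter, and its results can also drive transformations; a sketch (the cloud credentials, file name and the exact shape of the response are assumptions based on Cloudinary's documentation):

import { v2 as cloudinary } from "cloudinary";

cloudinary.config({
  cloud_name: process.env.CLOUDINARY_CLOUD,
  api_key: process.env.CLOUDINARY_KEY,
  api_secret: process.env.CLOUDINARY_SECRET,
});

// Request OCR analysis at upload time; the response's info.ocr section
// contains the recognized text along with its bounding boxes.
const result = await cloudinary.uploader.upload("photo.jpg", {
  ocr: "adv_ocr",
});
console.log(result.info?.ocr?.adv_ocr?.data?.[0]?.fullTextAnnotation?.text);

// The same analysis can drive transformations, e.g. cropping so that
// the detected text stays in view ("ocr_text" gravity).
const url = cloudinary.url(result.public_id, {
  width: 400, height: 200, crop: "crop", gravity: "ocr_text",
});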
In the "AI image search" view images can be searched by text after AI analysis of images. The images found are displayed in the same way as in the particular browsing view and can be drag'n'dropped for use in writings.
Before images can be searched using text, they must first be analysed by an AI, of which there are several options to choose from. The per-image AI analysis is at this stage intended to work in such a way that there is no need to send the images to another service; instead, all the analysis is done on the server(s) that the client may already be using. This may impose limitations on how quickly images can be analysed, as some AI models require a lot of computational power from the processor.
Some AI models may take tens of seconds to analyse a few hundred images, others minutes. The results of image analysis are automatically stored in a database so that they are available for comparison when retrieving images using text queries.
The analysis results of the different AI models are not compatible with each other, but the user interface has been designed in a way that provides clear indicators of which AI model has been used for which image containers.
After the first image analyses, the images can already be searched using text queries, and if more images are added to image containers later, a clear indicator of the missing image analyses is presented, so one knows which image container the image analysis should be applied to. Background image analysis would also be an implementation option, but it has its risks, e.g. in terms of excessive server load.
Preliminary experiments show that the search results are pleasantly reliable, e.g. when searching for "city streets", the search results do match the query. The analysis results can be deleted per image container and per AI by Ctrl-clicking on the name of the AI model.
The results of the image analyses are stored in the same database management system already used for storing data such as writing-related data, i.e. MariaDB. Newer versions of the database can store vector data, which is very suitable for this purpose. In the past, the use of a separate vector database (e.g. Qdrant, Pinecone or Weaviate) had been considered.
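A sketch of how the embeddings could be stored and queried, assuming MariaDB 11.7+ with its VECTOR column type and distance functions; the table and column names are made up for illustration:

import mariadb from "mariadb";

const pool = mariadb.createPool({ host: "localhost", database: "app" });

// Hypothetical similarity search over stored image embeddings.
// VEC_FromText converts a JSON-style array literal into a vector,
// and VEC_DISTANCE_COSINE orders rows by similarity to the query.
async function searchImages(queryEmbedding: number[], limit = 20) {
  const conn = await pool.getConnection();
  try {
    return await conn.query(
      `SELECT image_id,
              VEC_DISTANCE_COSINE(embedding, VEC_FromText(?)) AS distance
         FROM image_analysis
        ORDER BY distance
        LIMIT ?`,
      [JSON.stringify(queryEmbedding), limit]
    );
  } finally {
    conn.release();
  }
}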
To be contemplated later is whether the analysis of images could or should be done in a separate service with a large number of computing resources available. Currently, it is possible to increase performance by directing computationally intensive operations away from a virtual server and towards e.g. a dedicated server. However, it seems that a single virtual server is rather sufficient for momentarily utilizing an AI model without e.g. running out of memory on the server. Admittedly, the current AI-based image analysis uses the 360 pixel wide versions of the images. When using much larger ones, all the resources on a server would run out quite quickly.
The results of AI image analysis are not stored in project backups, so if a project and its image catalogs are deleted, the related image analysis results would also be gone.
The importing, item moving and particular browsing views provide a cropping tool, which allows you to (locally) edit images being uploaded to the server by cropping out unnecessary areas. The actual scaling of images to different sizes is done on the server. The image cropping tool can be accessed by holding down the Ctrl key when drag'n'dropping images into the upload dropzone (the bordered area labeled "Upload") or when changing a single image. That opens a modal window with two different Upload buttons, one of which instructs the server to use the medium size (640 pixels wide) as the maximum image size for the selected images. Images are uploaded to the server with a progress indicator.
When uploading new images or updating an existing one, pressing the Shift key at the beginning of the upload signals that you wish the original images to be scaled to highres versions as well, i.e. 1600 pixels wide (if the original image is at least that wide).
1600 pixels is the maximum width at which an image can be viewed in a public work, but in reality the transferred image is also scaled to a slightly larger size when possible, in case there was not enough interest in cropping the images at the time of transfer. Later cropping of an image can be initiated by Ctrl-clicking on any image in the image list, which opens a modal window with only the image re-cropping and saving functions. As in the Large Preview modal window, the left/right arrows can be used to select the next/previous image. This functionality is available in the "Browse specific types" view, as well as in the "Experimental" view, which is a kind of combination of the "Browse specific types" and "Edit text" views. In the "Particular browsing" view it is also possible to create new images based on the area marked with the cropping tool, as well as to scale the cropped image up by one size. These modal windows can be opened so that the image is not scaled down, if the browser window size allows it and the Ctrl key was pressed while opening the modal window.
For some images, such as tall screenshots containing unneeded parts, one might want to use vertical cropping. For images in the particular browsing view, the Split function is available in the image-specific action tools. Clicking on it opens a modal window showing the selected image and, next to it, an area with a horizontal line at the point indicated by the mouse pointer. Cut points can be added by clicking and removed by Ctrl-clicking on them. If the Split button is clicked while holding down the Ctrl key, the modal window opens in such a way that it does not try to fit the entire image into view, but shows the image at its actual size.
The Split function creates new particular-type images in the image format set in the user settings. These images may also include parts that are not needed, but those can be combined as desired using the Join function. As in the image assorting view, one can select images by clicking on them while holding down the Ctrl and Alt keys. When at least two images are selected, a Join button appears in the action tools. Clicking this button creates a new image in which the selected images are combined one below the other. Unnecessary images can then be deleted.
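A sketch of the kind of vertical stacking the Join function performs, here done client-side with a canvas; in the actual application the combination may well happen on the server:

// Illustrative client-side "join": stack several images vertically
// on a canvas and return the result as a PNG blob.
async function joinVertically(urls: string[]): Promise<Blob> {
  const images = await Promise.all(
    urls.map(
      (url) =>
        new Promise<HTMLImageElement>((resolve, reject) => {
          const img = new Image();
          img.crossOrigin = "anonymous"; // keep the canvas exportable
          img.onload = () => resolve(img);
          img.onerror = reject;
          img.src = url;
        })
    )
  );
  const canvas = document.createElement("canvas");
  canvas.width = Math.max(...images.map((i) => i.naturalWidth));
  canvas.height = images.reduce((sum, i) => sum + i.naturalHeight, 0);
  const ctx = canvas.getContext("2d")!;
  let y = 0;
  for (const img of images) {
    ctx.drawImage(img, 0, y); // one below the other
    y += img.naturalHeight;
  }
  return new Promise((resolve) =>
    canvas.toBlob((blob) => resolve(blob!), "image/png")
  );
}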