Automatic Transfer - Development Log #326

Fabian explains how the new transfer of materials feature works, Nick announces LG-913e's new name, and Michi describes last night's outage.

Fabian (Counterpoint)

Last week I continued to work on quality-of-life improvements for the upcoming maintenance release. The major feature of the week was the new MTRA command, which will allow you to transfer a specific amount of a material between inventories. You'll be able to open it by dragging a material onto a new "AMT" field, so that the source store, target store, and material come pre-filled. On top of that, however, you can also open it without any parameters and specify the entire transfer from the command alone, so you're no longer forced to use drag-and-drop at all.

I also started to work on a small new PRO feature: the flight controls will get an "unload on arrival" checkbox which, if checked, will make your ship try to move everything in its storage into a base store (or a warehouse if no base is present) when it finishes its flight. This should, for example, come in very handy when you're hauling consumables for your workers from a CX back to your base!

Nick

I'm back from vacation and ready to get back to all things PrUn! While I was away, an influencer video by Chiches came out, and it looks like we got a fair number of Spanish-speaking players from the ad. ¡Bienvenidos a todos!

I also want to announce the new name for LG-913e: Planet McPlanetface! The name is quite hilarious and has been mentioned a few times in other "Name THAT Planet" rounds. We also want to state that for this version of the universe, we are going to accept the name even though it sort of breaks the realistic theme of Prosperous Universe. In the final version of the universe, we will only accept "proper" names for planets. If you need further clarification on what that means, feel free to drop me a message on Discord :)

Michi (molp)

Last night we had one of the largest outages so far. Starting at around 0:00 GMT, the game went down. Unfortunately, the whole team was sound asleep at the time. When I got to my desk on Monday morning (of course it was a Monday :)), I was greeted by several messages that APEX wasn't working anymore. A quick glance at the logs showed that the usual culprit, the server itself, wasn't the problem. It was our database: the server tried to write data, but the database declined, stating that not enough nodes were available to achieve a write quorum. To understand that, one has to know that our data store consists of three individual nodes, and all of them have to agree when a write occurs. Since we have only three nodes, writing to the database becomes impossible as soon as one of them goes down. Right now, three nodes is plenty, and once the player base grows larger we can add more nodes, which adds resilience. Querying data, on the other hand, only requires a quorum of one.
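The quorum rule described above can be sketched in a few lines. This is just an illustration of the concept, not our actual database configuration; the constants and function names are made up for the example.

```python
# Hypothetical quorum settings mirroring the setup described above:
# three nodes, all of which must acknowledge a write, while a single
# node suffices to answer a query.
TOTAL_NODES = 3
WRITE_QUORUM = 3  # every node must agree before a write is accepted
READ_QUORUM = 1   # one reachable node is enough to serve a read

def can_write(available_nodes: int) -> bool:
    """A write succeeds only if enough nodes are up to form the write quorum."""
    return available_nodes >= WRITE_QUORUM

def can_read(available_nodes: int) -> bool:
    """A read succeeds as long as at least one node is reachable."""
    return available_nodes >= READ_QUORUM

# With all three nodes up, both reads and writes work:
print(can_write(3), can_read(3))  # True True
# With one node down (as during the outage), writes fail but reads still work:
print(can_write(2), can_read(2))  # False True
```

This is why the game could still show data during parts of the outage window while refusing any action that had to be persisted.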

The underlying issue was simple: one node had run out of disk space and simply couldn't write any more data. Martin and I were able to fix that rather quickly and increase the amount of disk space available to the three database nodes. After a little cleaning up, everything is almost back to normal (of course, there is one issue left that needs our attention).
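A basic guard against this kind of failure is to check free space before it becomes critical. The sketch below is a minimal, hypothetical example using Python's standard library, not our actual monitoring setup; the mount point and the 10% threshold are assumptions.

```python
import shutil

def free_space_ratio(path: str = "/") -> float:
    """Return the fraction of disk space still free at the given mount point."""
    usage = shutil.disk_usage(path)
    return usage.free / usage.total

# Warn well before a node can no longer accept writes
# (the 10% threshold is an arbitrary example value):
if free_space_ratio("/") < 0.10:
    print("warning: less than 10% disk space left on this node")
```

Run periodically on each database node, a check like this would have raised an alarm long before the write quorum was lost.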

While the immediate cause was found quickly, the question remains why we ran out of disk space at all. I had just checked the available disk space last week, during my work on the snapshots, and there was plenty left. I am now looking into why so much disk space was used. Right now, I am pretty sure it is not actual persisted data, but rather temporary files the database creates.

As always, we'd love to hear what you think: join us on Discord or the forums!

Happy trading!
