As you may already know, Soleil Levant went offline a few weeks ago. Its host, mrlerien, had grown tired of managing a used, dying server for almost half a year.
So we decided to shut down Soleil Levant for good.
Some Project Segfault services have been impacted by this: notably Synapse, our Matrix homeserver, which has been scheduled for permanent shutdown on 7 October 2023. Arya will try to keep it up for at least a week after that, but no promises, since it's running on his laptop :).
This was done before Soleil shut down, but it's worth noting that our Pubnix now has a quota of 10G instead of 20G. The Pubnix was growing too big, and we can no longer sustainably give 20G to every user, especially after the IN node migration.
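For illustration, enforcing a per-user cap like that can be done with filesystem quotas. This is just a sketch, assuming an ext4 home partition mounted with user quotas enabled; the username and mount point below are hypothetical, not our actual setup:

```shell
# Hypothetical example: cap a pubnix user at 10G on an ext4 home partition.
# Requires the filesystem to be mounted with the usrquota option and the
# quota tools installed.
# setquota takes block limits in 1 KiB units: 10 * 1024 * 1024 = 10485760
setquota -u exampleuser 10485760 10485760 0 0 /home

# Verify the new limit:
quota -u exampleuser
```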
As for the rest, everything is still going fine, with the data having been moved from Soleil to Arya's servers in India.
This was done to keep most of the services up. While we will do our best to keep them running stably, Arya's current hardware doesn't let us go beyond what we used to host without the server becoming unstable (trust me, we tried; consumer hardware can't be pushed that far, unfortunately).
Project Segfault will still be here, and we are still maintaining the rest of our services, but the following services will shut down due to resource constraints. Most of them saw minimal usage or were only created recently, which should minimize the damage:
- Matrix - See below for what's happening to it
- LibreTranslate - Wasn't used much, and uses a LOT of CPU
- Akkoma - Was basically only used by us admins, and it took up too much CPU for that
- Plausible - Uses too much CPU, and do we really need these analytics?
This should give IN01 and IN02 (the computer that hosted the IN node before the Acer was commissioned) more room for other services to operate properly.
Matrix is our most intensive service, even surpassing the Pubnix.
Our original plan was to close registrations on Matrix and continue running it on IN02, but in the end we realized our other services like Piped and Invidious took up too much for that to work.
Going from 320GB to 16GB of RAM takes a toll, doesn't it :)
For these reasons, we decided to discontinue Matrix, but we didn't have any server capable of hosting it.
Lerien wanted to build another rig for some personal use in a bit, and we were planning on using that to let users get their data out, but we couldn't keep you all waiting for your data that long (and we can't wait for ours either :P).
Because of this, we decided to just host it on Arya's laptop so users can take out their data (it's docked basically all the time, so it's stable enough).
Specs of it are as follows:
- Thinkpad E14 G5
- AMD Ryzen 5 7530U
- 12G DDR4 RAM
- 512 GB Kingston NVMe SSD (256GB allocated for matrix DB) and 1TB Sony HDD (for media files)
But what about the "measures" you took to keep Soleil up?
We bought two fairly new hard drives for the RAID array in case one of the drives failed again. That was the plan until we realised the hardware RAID controller was artificially limited, preventing us from rebuilding the array with a new disk.
You could say it was a waste of money, and that we should've checked whether the RAID controller supported this at all.
Unfortunately, flashing the RAID card with IT firmware (which lets it act as a plain HBA) was not an option either, as that would have bricked it (our specific card did not support being flashed). That was another reason to shut down Soleil: had we kept it longer, more and more problems would have occurred, inducing more headaches.
Information on the new setup
We now have two servers, IN01 and IN02.
The specs are as follows:
- Acer Aspire 7 A715-75G
- i5-9300H (4c4t)
- 12G DDR4
- 512 GB Lenovo/UnionMemory NVMe SSD with encrypted ZFS for VMs
- Runs the webserver and most services, along with the Pubnix
- Macbook Pro 2017 (14,3)
- 16G DDR3
- 256 GB Apple NVMe SSD with encrypted ZFS for VMs
- Runs intensive services such as XMPP, Kbin, Invidious, Piped and Jitsi
But what about the GDPR?
We know this was pretty much a clear violation of the GDPR, but we had no other choice.
Pizza1 was already overloaded, even more so after we moved the other privacy frontends back to it.
So moving to this setup was the only option we had.
However, there is a small silver lining. Around the time all this was happening, the controversial Digital Personal Data Protection Bill, 2023 was passed in India.
According to the EU, data can be transferred freely to countries it deems to have adequate privacy laws; if they recognize the DPDPB, this won't be in a legal gray area. But politics is politics, and we don't know what will happen :)
Okay but what about the US Node?
So this is, again, us being incompetent :D
Originally, the US node was on DigitalOcean, which gave us only 2TB of monthly bandwidth. Because of this, we were almost constantly monitoring our usage.
But after the RackNerd migration, which gave us 12TB of monthly bandwidth, we got a bit complacent and re-enabled proxying on Invidious.
Last month we barely got by, but this month, around the 21st, RackNerd suspended our VPS for using too much bandwidth, and that's when the harsh realizations started.
It turns out RackNerd counts both inbound and outbound traffic toward the 12TB, which means we effectively had less than half of what we thought.
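To put rough numbers on that accounting difference (a back-of-the-envelope sketch; the assumption here is that a proxying service pulls roughly as much traffic in as it serves out):

```python
# Back-of-the-envelope: if a 12 TB monthly cap counts traffic in BOTH
# directions, a proxying service (which downloads roughly as much as it
# serves) effectively gets about half the cap as user-facing egress.
cap_tb = 12

# Outbound-only accounting: the full cap is usable egress.
egress_if_outbound_only = cap_tb

# In+out accounting with roughly symmetric proxy traffic:
# every TB served to users costs ~1 TB of inbound traffic too.
egress_if_both_counted = cap_tb / 2

print(egress_if_outbound_only, egress_if_both_counted)  # 12 TB vs 6 TB
```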
While we looked for solutions, we realized RackNerd had only one option to get the server back up before the 1st of October: paying a fixed rate of $7, which adds 1TB of extra data for every month of the yearly plan. This was really, really expensive, and it's frankly dumb that they don't offer billing based on how much you actually use past the 12TB limit.
Because of this, we decided to just let the US node stay down until the 1st of October. What a great ending to the month!
A small silver lining
Nitter is back!
Nitter EU and US are back, using guest accounts and rate limits. Thanks to woodland.cafe for the help!
Our main services all run on encrypted storage now
Since we had to migrate everything to a new SSD anyway, I took the opportunity to move the disks to encrypted ZFS, just for the extra security.
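For reference, creating a natively encrypted dataset with OpenZFS looks roughly like this. The pool name "tank" and dataset "tank/vms" are illustrative only; the actual layout on our servers may differ:

```shell
# Hypothetical sketch: create a natively encrypted ZFS dataset for VM disks.
# Pool "tank" and dataset "tank/vms" are made-up names for illustration.
zfs create \
  -o encryption=aes-256-gcm \
  -o keyformat=passphrase \
  -o keylocation=prompt \
  tank/vms

# After a reboot, the key must be loaded before the dataset can be mounted:
zfs load-key tank/vms && zfs mount tank/vms
```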
What will you do in the future?
Project Segfault will keep going, but in a much more restricted state.
We have also gone into reduced mode since school and studies have started again, so all of us are busy with our lives instead of taking care of the project. Reduced mode means we'll keep maintenance to a minimum.
Services that run into issues may be down for longer periods due to our reduced activity. Feel free to mail us if something is down.
Moral of the story
Don't host big things on flimsy hardware. We failed you all by doing so, and we're sorry; this may have destroyed some of the trust our users had in us. Reviewing possible failure modes should have been our priority.
We also shouldn't have grown so big. By opening up Matrix registrations, we pushed the hardware to its limits; we should've thought about sustainability before doing so.