
question for tech-y storage people: I just nabbed 4 extra (used) 6 TB disks from a place that re-sells electronics (oregonrecycles.com); I think they did some testing but obviously I dunno to what extent, and also, if it was just limited to stuff like SMART data, well...

anyway, I wanna RAID it, which is fine, but I don't know how much lifetime these have left in them. For four disks with unknown usage, should I use RAID6 or RAID10? It's not job-critical data I'll be storing on these (mostly media and such). They'll eventually be migrated into a larger RAID array, but that won't happen until I'm stable and can afford to rebuild my server, so this is fine for now.

I wouldn't mind the better read/write performance that comes with RAID10 even though it has less parity. I suspect these disks were all used together, so they might have similar wear-and-tear patterns; in that case, I'm wondering if RAID6's double parity actually buys me any extra life. Like, given 4 disks with the same history and a presumably similar failure rate, I'm not really clear on whether double parity is going to make much of a difference (if one goes down, the others probably aren't too far behind).
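For what it's worth, with four disks the capacity math is a wash, so the trade is purely about which failures you survive. A quick back-of-the-envelope sketch (shell arithmetic, just the standard RAID capacity formulas):

```shell
disks=4
size_tb=6

# RAID6: capacity of (n-2) disks; survives ANY two disk failures.
raid6_usable=$(( (disks - 2) * size_tb ))

# RAID10: capacity of n/2 disks; survives one failure for sure, and a
# second one only if it lands in the *other* mirror pair.
raid10_usable=$(( disks / 2 * size_tb ))

echo "RAID6:  ${raid6_usable} TB usable, any 2 failures tolerated"
echo "RAID10: ${raid10_usable} TB usable, 1 failure guaranteed tolerated"
```

So with correlated wear, the case for RAID6 is that a second failure during the long, stressful rebuild is covered no matter which disk it hits; RAID10 rebuilds faster and gentler, but a second failure in the wrong pair is fatal.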

#techPosting #raid #raid6 #raid10 #storage #nas

I was really worried about why the RAID drives on our new #Linux #server were so noisy. A quick "write" noise every second, like a heartbeat.

Some investigation revealed that a "journal" service seemed to be writing ~512k of data every second, and only when I had that exact amount did my googling/ducking generate a useful result:

When I set up the #raid6 stuff, I did a "lazy" ext4 format, so the OS keeps running that extremely slow and noisy initialization process in the background.

Non-lazy reformat, go!
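For reference, the non-lazy format looks like this — sketched here on a loopback image so no real device is touched (`demo.img` is just an illustrative name):

```shell
# Small image file standing in for the real array device.
truncate -s 512M demo.img

# lazy_itable_init=0 / lazy_journal_init=0 zero the inode tables and the
# journal at mkfs time, so no background "ext4lazyinit" writer runs later.
mkfs.ext4 -F -E lazy_itable_init=0,lazy_journal_init=0 demo.img
```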

Anyone have experience deploying #btrfs in #raid5 / #raid6 configurations? I'm aware of the mdadm raid5/6 + LVM + btrfs approach Synology uses, and I've got a UPS and a read-heavy workload, so I don't really care about the native raid5/6 write hole, but from everything I'm reading, the "off the shelf" industry solution appears to be #zfs in raidz/raidz2
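For anyone unfamiliar, the Synology-style layering mentioned above looks roughly like this — a destructive sketch with illustrative device and volume names, not something to paste as-is:

```shell
# mdadm handles the parity; btrfs then sits on top as a single "device",
# so its native raid5/6 code (and the write hole) is never involved.
mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[abcd]
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -l 100%FREE -n data vg0
mkfs.btrfs /dev/vg0/data
```

You keep btrfs checksums and snapshots this way, but note that btrfs can only *detect* corruption on a single underlying device — it has no redundant copy of its own to self-heal from.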

Something has happened with #Btrfs in the last year. Over the past few days I successfully repaired a #RAID6 with 8 devices, where one had failed completely, one kept dying over and over, and another died during the repair.
A year ago the FS wouldn't have come close to surviving that.

As of today, the data from leipzig.town is being backed up to a new storage server, in line with Mastodon's recommendation. Every night the machine stores the currently 1 GB PostgreSQL database, the .env.production file, 27 GB of user files, and a dump of the Redis database on a RAID6 disk array.

#backup #benutzer #daten #datenbank #instanz #mastodon #postgresql #raid6 #redis #secrets #server #storage

https://leipzigtown.blog/neuer-backup-server/


@nuron @ij
Yes, especially for the keywords "different sizes" and "flexible pool", #btrfs is a very good and recommendable choice.
At the moment you just shouldn't use #raid5 and #raid6 yet.
But you can start a #btrfs with #raid1c4 and convert it later without any trouble.
For example, I currently run a #btrfs with 60 HDDs from 250G to 16T that started out as #raid1c4 and currently runs as #raid1c3.
As soon as #raid6 is stable, I'll convert it online.
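The online conversion mentioned here is a single balance with convert filters (mount point illustrative):

```shell
# Rewrite existing data and metadata into the target profiles while the
# filesystem stays mounted and in use.
btrfs balance start -dconvert=raid6 -mconvert=raid1c4 /mnt/pool
```

Keeping metadata on raid1c4 even once data moves to raid6 is a commonly recommended mitigation for the remaining raid5/6 caveats.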

1/x

#100DaysOfHomeLab #Day4of100

So another update from me, because sometimes things go slow. I've booted a #NixOS live image and the #badblocks program is running a check of my four 4 TB drives (it had been running for 86 hrs and was about 84% done the last time I checked).

I did change my decision to use #NILFS2 & #RAID6. After deliberate consideration I've switched my choice to #ZFS with #RAID10.
— no longer NILFS2 because it doesn't have compression and ZFS even includes the kitchen sink
- 1/2
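The ZFS "RAID10" layout described above would be created roughly like this (pool and device names are illustrative):

```shell
# Two mirror vdevs striped together: ZFS's equivalent of RAID10.
zpool create -o ashift=12 backup mirror sda sdb mirror sdc sdd

# Enable the compression that motivated the switch away from NILFS2.
zfs set compression=lz4 backup
```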

#100DaysOfHomeLab #Day3of100

So after more reading tonight for my *BACKUP* server (that will be its main purpose) I have decided to:
— create a 4 disk #RAID6 array with #mdadm
— have it use #NILFS2 as its file system (with continual #checkpoints and manual #snapshots)
— use @nixos_org as OS

I understand there will be a performance hit from both of these decisions, but this will be the biggest part of my backup strategy.
TBD:
— filesystem for OS on SSD :thaenkin:
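The stack decided on above can be sketched as follows (illustrative device names, destructive commands — not to be run as-is):

```shell
# Four-disk RAID6 via mdadm, then NILFS2 on top of the md device.
mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[abcd]
mkfs.nilfs2 /dev/md0
mount -t nilfs2 /dev/md0 /mnt/backup

# NILFS2 creates checkpoints continuously; lscp lists them, and chcp can
# promote one to a snapshot so the garbage collector won't reclaim it.
lscp /dev/md0
chcp ss /dev/md0 2        # "2" is a hypothetical checkpoint number
```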