MD1000 In or Out



So I have a Dell R720 with an H200 external SAS card, connected to an MD1000 DAS shelf.

SAS drives can accept two connections at once, such as two different servers talking to the same drive; that's why there are two interface modules in the MD1000. If you have SAS drives, you can connect the MD1000 to a pair of servers and both can access it. SATA does not have that ability, so you can only use one server. The interposer boards give that dual-connection ability to a SATA disk (a quick way to check what is actually connected is sketched below).
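As a hypothetical first check (not from the thread, and the device names are illustrative), lsscsi shows how the shelf's disks enumerate behind the H200 and whether each one is a native SAS device or a SATA disk behind the expander:

    # SATA disks behind a SAS expander typically report vendor "ATA" in their
    # inquiry data, while native SAS disks report the manufacturer directly.
    lsscsi        # e.g. [1:0:0:0]  disk  ATA  ...  /dev/sdb  (shape only)
    lsscsi -t     # adds the SAS address each device is reachable at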

That dual connection is basically the only thing the interposers do. It's also what's going on with split mode: one controller talks to half the drives on channel 1 while the other controller talks to the other half on channel 2, and there's no way to change that. Split mode (and the dual controllers in the MD1000) is useless unless you have SAS disks or interposers.

Random question (more regarding the MD1200, but probably the same for the MD1000): how do you actually get two servers talking to the same drive? You're explaining split mode, but as you said, and as the manual also states, split mode splits the enclosure in two (each half with 6 disks). In unified mode, are you supposed to plug both ports into the same server, or can you plug them into different ones? And wouldn't the result be the same as with split mode? Another random question: if you have it in unified mode with redundancy (two cables), does it provide 4 additional lanes (8 total) for the disks, or still only 4?


I believe unified mode with two cables doesn't accomplish anything for SATA drives; that again relies on the dual-port SAS feature. If you have SAS drives in unified mode, you can connect server 1 to controller 1 and server 2 to controller 2, and both servers can access ALL the drives. Or you can use split mode, and each server sees half of them. Or you can connect a single server to both controllers, preferably via two separate HBA cards, and then you have redundancy against a card or cable failure.
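For the dual-cable, single-server case, here's a minimal sketch of what that looks like on Linux, assuming SAS disks (or interposers) and the multipath-tools package; with plain SATA disks each drive simply shows up once and there is nothing to merge:

    # With both EMMs cabled to one server and dual-ported disks, each disk
    # enumerates twice (e.g. /dev/sdb and /dev/sdj). dm-multipath merges the
    # two paths into one device so a cable or HBA failure doesn't drop the disk.
    modprobe dm-multipath
    multipath -ll    # list the resulting maps and the state of each path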


I don't know if there are any performance gains from the dual-HBA setup.

Yeah, I realise this, but there's certainly something else happening. Each channel consists of four 3Gbit SAS links, which should give a total of 12Gbit raw, around 1500MB/sec (closer to 1200MB/sec of usable payload once 8b/10b encoding is accounted for). Plenty for a box with 15 spinning drives inside. With SAS and split mode, you can run both cables and end up with 8 links spread across the shelf, for double the bandwidth. Your explanation makes sense as to why split mode behaves the way it does, but it doesn't explain what's going on otherwise.
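To put numbers on the lane arithmetic, a quick sketch (the SAS host number in the sysfs path is an assumption; check /sys/class/sas_host for the right one):

    # SAS-1 lane arithmetic: 8b/10b encoding puts 10 bits on the wire per
    # data byte, so a 3 Gbit/s lane carries roughly 300 MB/s of payload.
    echo $(( 3000 / 10 ))      # 300  MB/s usable per lane
    echo $(( 4 * 3000 / 10 ))  # 1200 MB/s usable across an x4 wide port

    # The Linux SAS transport class exposes each phy's negotiated rate,
    # which is one way to confirm that all four lanes came up at 3 Gbit:
    for phy in /sys/class/sas_phy/phy-1:*; do
        printf '%s: %s\n' "${phy##*/}" "$(cat "$phy/negotiated_linkrate")"
    done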

What seems to be happening is that instead of using four links, it's only using ONE of them, giving a theoretical max of 375MB/sec raw, or about 300MB/sec usable. The SAS discovery utility shows all four links are up at 3Gbit, so the cable and controller are happy. I've found a few mentions in old threads floating around the internet suggesting that with the MD1000 you can either use SATA drives directly without the interposers, but end up stuck with one lane, OR use the interposers, but then you're stuck with a 2TB limit due to the SAS1 hardware. Of course, this is a completely unsupported config, so there's no official line from Dell about it.

I've got dd running now doing a zero fill, and with all 8 drives running I'm not even managing 50MB/sec out of the drives. If I stop everything and run just one disk, I get 130MB/sec.

I can get two drives running at 130MB/sec, but any more and they all start slowing down.

Sorry, perhaps you misunderstand. The drives were managing 40MB/sec each, for a total in the 300s. But for a straightforward zero fill like this, they should be doing well over 100MB/sec each, totalling 800MB/sec. As a test I moved two drives to the R720's internal backplane and ran the whole thing again. The drives on the backplane are managing over 150MB/sec. The drives on the MD1000 have sped up slightly, as there are only 6 of them now, managing 55MB/sec each, so still totalling the same bandwidth in the 300MB/sec range. I guess I need to see if the internal backplane acts the same way when dealing with multiple drives at once.
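For reference, a rough reproduction of the zero-fill test above; device names are placeholders, and this destroys the contents of the named disks:

    # DESTRUCTIVE: zero-fills every listed disk in parallel. Only run this
    # against disks whose contents you are willing to lose.
    for dev in sdb sdc sdd sde sdf sdg sdh sdi; do
        dd if=/dev/zero of=/dev/"$dev" bs=1M oflag=direct &
    done
    wait
    # Watch per-disk and aggregate throughput from another terminal with:
    #   iostat -mx 2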

Managed to do some more testing with the MD1000, doing reads this time. I started dd-ing each drive to /dev/null: first one drive, then 2, then 3, then 4. I didn't go any further than 4, as it was clear what was going on by that point. One drive manages 130MB/sec, and two drives manage 260MB/sec. After that it doesn't go any faster: adding more transfers just slows down the existing ones, and the total sits at 260MB/sec. That plateau is just under the ~300MB/sec of usable payload a single 3Gbps lane can carry, which fits the one-lane theory. I then tried the same thing with the drives on the internal backplane.
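The read-scaling test described above looks roughly like this (hypothetical device names; start with one reader and add disks one at a time to watch the aggregate plateau):

    # Read each disk sequentially into /dev/null, bypassing the page cache.
    # With iostat -mx 2 running elsewhere, the total should stall at
    # ~260 MB/s on the MD1000 no matter how many readers are added.
    for dev in sdb sdc sdd sde; do
        dd if=/dev/"$dev" of=/dev/null bs=1M count=8192 iflag=direct &
    done
    wait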


On the internal backplane it scales completely linearly to way over 1000MB/sec. Also interesting to note that the 8TB drives on the backplane manage 220MB/sec each, almost twice as fast as the same drives on the MD1000, and they keep that speed even with lots of other transfers running.
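For the per-drive comparison, timing buffered reads straight off each device is enough, e.g. (device names illustrative):

    hdparm -t /dev/sdb    # the 8TB drive sitting in the MD1000
    hdparm -t /dev/sdk    # the same model drive on the R720's backplane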

The rest of the drives on the backplane are older and slower (4TB, I think), which likely explains their lower speeds. The question now is: is this a limitation of the MD1000 that will be fixed simply by replacing it with an MD1200, or is there something else going on, meaning spending a lot of money on the new shelf won't actually fix it?

Per the manual, each EMM (enclosure management module) has one connection going to the HBA in the server, while the other connection is for daisy-chaining a further MD1000 shelf. The purpose of two EMMs is either (with SAS) high availability, should a controller, EMM, or cable fail, or (with SAS or SCSI) splitting the shelf between two host systems. At best, you can split the MD1000 between two ports or two controllers in your server.

Sure, but each of those 'one' connections contains four lanes, each capable of 3Gbps, for a total bandwidth of 12Gbps per cable. If it was doing 12Gbps it would be fine. The server shows the DAS is connected and all four lanes are up. The problem seems to be that the unit is only using one lane to actually access the drives.


Essentially a single 3Gbps link for 12 drives. From what I can figure out, this is simply how this old thing works when dealing with SATA drives without interposers. If I use interposers it will (seemingly) start using the lanes properly, but the interposers themselves limit the connected drives to 2TB. As mentioned above, split controller mode doesn't work without the interposers: the secondary controller tries to talk to the drives on the secondary SAS channel, which doesn't exist on a SATA drive, so the drives aren't detected. It's a backup server, so outright performance isn't critical.

The main thing where I was seeing oddness before adding these new drives was ZFS rebuilds/scrubs: the rebuild/scrub starts off flying at over 1GB/sec, but after a while slows right down. Having seen how it acts when simply reading from the drives, that now makes sense. Originally I had 8 drives in the server's internal backplane and only 6 in the MD1000. With the additional drives going in, I've doubled the number of disks on the MD1000 and as a result made the problem significantly worse. And of course, you want RAID rebuilds to happen as quickly as possible.
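For watching how a rebuild or scrub develops over time, the standard ZFS tooling is enough (the pool name 'tank' is a placeholder):

    zpool scrub tank
    zpool status tank        # shows the scan rate and estimated completion
    zpool iostat -v tank 5   # per-vdev and per-disk bandwidth every 5 seconds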
