Hello all, I've been on a real roller coaster ride getting a large virtual host up and running. One troublesome thing I've discovered (the hard way) is that the drivers for Marvell SAS/SATA chips still have a few problems.
After Googling around quite a bit, I see a significant number of others have had similar issues, especially evident in the Ubuntu forums but also for a few RHEL/CentOS users. I have found that under heavy load (in my case, simply doing the initial sync of large RAID-6 arrays) the current 0.8 driver can wander off into the weeds after a while, less so for the older 0.5 driver in CentOS-5. It would appear that some sort of bug has been introduced into the newer driver. I've had to replace the Marvell-based controllers with LSI, which seem rock solid. I'm rather disappointed that I've wasted good money on several Marvell-based controller cards (2 SAS/SATA and 2 SATA).
Is anyone aware of the *real* status of these drivers? The Internet is full of somewhat conflicting reports. I'm referring to 'mvsas' and 'sata_mv', both of which seem to have issues under heavy load. It sure would be nice to return to using what appear to be well-made controller cards. I understand that even Alan Cox has expressed some frustration with the current driver status. FWIW, I had similar problems under the RHEL-6 evaluation OS too.
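For anyone comparing notes, the driver version a given kernel ships can be checked with modinfo, assuming the module carries a version string; for example:

    # show the version of the Marvell SAS and SATA drivers in the installed kernel
    modinfo mvsas   | grep -i version
    modinfo sata_mv | grep -i version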
On 3/3/11 6:52 PM, Chuck Munro wrote:
> One troublesome thing I've discovered (the hard way) is that the drivers for Marvell SAS/SATA chips still have a few problems. [...] I've had to replace the Marvell-based controllers with LSI, which seem rock solid.

I replaced separate SII and Promise controllers with a single 8-port Marvell-based card and thought it was a big improvement. No problems with CentOS 5.x, mostly running RAID-1 pairs, one of which is frequently hot-swapped and re-synced. I hope it's not going to have problems when I upgrade.
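For reference, the swap-and-resync cycle with md is only a few commands; a rough sketch, with placeholder array and device names:

    # mark the outgoing member failed and pull it from the array
    mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1
    # ... physically swap the disk, partition it to match, then re-add it ...
    mdadm /dev/md0 --add /dev/sdc1
    # watch the re-sync progress
    cat /proc/mdstat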
Chuck Munro

Since I have the luxury of time to evaluate options, I've just downloaded Scientific Linux 6 to see what happens with either the mvsas or sata_mv driver. This is my first experience with SL, but I wanted native ext4 rather than the preview version in CentOS-Plus. Even if I stick with SL-6 as the KVM host, I'll continue using CentOS for the guest machines. If the Marvell drivers don't pan out, it looks like I'll have to either spend money on a 3Ware, LSI or Promise controller, or revert to CentOS-Plus 5.5 for ext4.
SL-6 is installing as I write this.

Chuck Munro

> The 3ware are excellent. And Promise, historically, is *not*.

Yes, I've had problems with Promise cards in the past, but haven't bought any for a long time. They seem to be moving upscale these days. Regarding the Marvell drivers, I had good luck with the 'sata_mv' driver in Scientific Linux 6 just yesterday, running a pair of 4-port PCIe-x4 Sonnet 'Tempo' controller cards. So it appears someone has fixed that particular driver. I've decided to stick with those cards rather than re-install the Supermicro/Marvell SAS/SATA 8-port controllers, which use the 'mvsas' driver that I had problems with on the RHEL-6 evaluation distro. So far, SL-6 has performed very well: all RAID-6 arrays re-synced properly, and running concurrent forced fscks on eight arrays was very fast (because the ext4 filesystems were still empty :-) ). I think I'll stick with SL-6 as the VM host OS, but will use CentOS for the guest VMs. CentOS-5.x will do fine for now, and I'll have the luxury of upgrading guest OSes to CentOS-6 as the opportunity arises.
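For anyone wanting to run the same sort of concurrent check, something along these lines will do it; the md device names are placeholders for however many arrays you have:

    # force a read-only check of each (still empty) ext4 array, all in parallel
    for md in /dev/md0 /dev/md1 /dev/md2 /dev/md3; do
        fsck.ext4 -f -n "$md" &
    done
    wait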
Chuck Munro

compdoc wrote:
> Are those the Mac/Windows Sonnet cards that go for less than $200? What kind of performance are you seeing? Are you doing software RAID on them?

Yes, those are the cards that target Windows and OS X, but they work fine on Linux as well. They use the Marvell 88SX series chips. They control six 2 TB WD Caviar Black drives, arranged as five drives in a RAID-6 array with one hot spare; three drives are connected to each of the two cards. /proc/mdstat shows array re-sync speed is usually over 100 MBytes/sec, although that tends to vary quite a bit over time.
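In case it's useful to anyone building something similar, the md side of a layout like that is roughly the following; the device names and partitions are placeholders, not my actual assignments:

    # five active drives plus one hot spare in a software RAID-6
    mdadm --create /dev/md0 --level=6 --raid-devices=5 --spare-devices=1 \
          /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1
    # watch the initial sync and its current speed
    cat /proc/mdstat
    # md throttles re-syncs; the floor and ceiling are tunable if it crawls
    cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max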
On 03/05/11 7:01 AM, Eero Volotinen wrote:
> areca works. For SAS, I prefer LSI Logic.

The Supermicro mobo I'm using (X8DAL-3) has an on-board LSI 1068E SAS/SATA controller chip, although I have the RAID functionality disabled so I can use it as just a bunch of drives for software RAID-6. Like the Tempo cards, it has six 2 TB WD SATA drives attached, which provides a second set of arrays. Performance really sucks, for some unknown reason, and I get lots of I/O error messages logged when the drives get busy. There appears to be no data corruption, just a lot of retries that slow things down significantly. The LSI web site has no info about the errors. The firmware is passing back I/O abort code 0403 and LSI debug info related to 'channel 0 id 9'. There are only 8 ports, so I don't know which disk drive may or may not be causing problems. The SMART data on all disks shows no issues, although I tend to treat some SMART data with scepticism. I need to track this error down, because my understanding is that the LSI controller chip has very good performance.
Nico Kadel-Garcia

> I need to track this error down, because my understanding is that the LSI controller chip has very good performance.

I've had Linux integration issues with them for various reasons. Also, one LSI chipset may differ, a *lot*, from the next LSI chipset in performance and integration. I like Adaptec for price/performance, and good overall Linux compatibility (including CentOS). Just don't order those 'fell off the truck' Taiwan specials that are clearly Adaptec chipsets but have actually had the numbers filed off. (Ran into those at a hardware vendor that specialized in promising BIG! but which had never tested the components in combination; explaining that they needed to file 2 millimeters off the overlong and badly cut mounting plates, or the controller cards would *keep* unseating, was... not a good conversation.)
Charles Polisher

Nico Kadel-Garcia wrote:
> I like Adaptec for price/performance, and good overall Linux compatibility (including CentOS).

Adaptec is proud of their 'HostRAID' technology, which has a spotty record with Linux compatibility. The manufacturer's descriptions have led people to mistakenly think they bought a hardware RAID card when in fact the RAID functions are implemented in software. This approach has been dubbed 'fake RAID'. It's not clear to me that this is a win compared to using the kernel's software RAID features. [link] tells the sorry tale of a company whose products used to be a safe bet; the comments tend to confirm the sad state of affairs. [link] covers fake RAID.
Nico Kadel-Garcia

Ouch. That was *precisely* why I used the 2410, not the 1420, SATA card some years back. It was nominally more expensive, but well worth the reliability and support, which was very good for RHEL and CentOS. I hadn't been thinking about that HostRAID messiness because I read the reviews and avoided it early.
Chuck Munro

Here's the latest info, which I'll share. It's good news, thankfully. The problem with terrible performance on the LSI controller was traced to a flaky disk. It turns out that if you examine dmesg carefully, you'll find a mapping of the controller's PHY numbers to the 'id X' string (thanks to an IT friend for that tip). The LSI error messages have dropped from several thousand per day to maybe 4 or 5 per day when stressed. Now the LSI controller is busy re-syncing the arrays with speed consistently over 100,000 KB/sec, which is excellent. My scepticism regarding SMART data continues. The flaky drive showed no errors, and a full test and full zero-write using the WD diagnostics revealed no errors either. If the drive is bad, there's no evidence that would cause WD to issue an RMA.
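In case it helps anyone else chasing a similar problem, the mapping can be dug out roughly like this; the exact log wording varies by driver, and sdX is a placeholder:

    # the host:channel:id:lun tuple ('0:0:9:0' for channel 0 id 9) is the last
    # component of each block device's sysfs link, which ties it to /dev/sdX
    ls -l /sys/block/sd*/device
    # scan the kernel log for the controller's phy/target attach messages
    dmesg | grep -iE 'phy|attach' | less
    # then look at the suspect drive directly
    smartctl -a /dev/sdX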
Regarding 'fake raid' controllers, I use them in several small machines, but only as JBOD with software RAID. I haven't used Adaptec cards for many years, mostly because their SCSI controllers back in the early days were junk. Using RAID to protect the root/boot drives requires one bit of extra work: make sure you install grub in the boot sector of at least two drives, so you can boot from an alternate if necessary. CentOS/SL/RHEL doesn't do that for you; it only puts grub in the boot sector of the first drive in an array.
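For the archives, the manual step looks roughly like this with the legacy grub shipped in CentOS/SL 5 and 6, assuming /boot is the first partition on each mirrored drive (adjust device names to your layout):

    # Install grub to the MBR of the second RAID member as well, so the box
    # can still boot if the first drive dies. Inside the grub shell:
    grub
    grub> device (hd0) /dev/sdb
    grub> root (hd0,0)
    grub> setup (hd0)
    grub> quit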
On Tue, 8 Mar 2011, Drew wrote:
> I blame Adaptec for the dominance of IDE. Seriously. If Adaptec a) hadn't had the lion's share of the SCSI mindset in the PC business back in the 90s, and b) hadn't made so much overpriced buggy crap, we'd all be using SCSI today.

Yes and no. I remember playing with it back in the 90s, and what drove me away from SCSI was the complexity of the standard. Yes, Adaptec made it harder than it had to be, but IDE, for all its failings, was easier to use. You jumpered one disk as master and one as slave, and it pretty much just worked. SCSI, on the other hand, at least in DOS/Win3.x/Win95/98, was a complex process involving TSRs and fiddling with jumpers on the disks and the HBA. I remember my father spent six hours trying to get a simple SCSI scanner to work. By the time Red Hat 6 came out, when I made my first real foray into Linux, SCSI support was a lot better. I also took the time to sit down with a sysadmin I knew and download his knowledge about SCSI, which he'd learned over the decades.
I loved the mid-90s saying: "SCSI is like voodoo: it all depends on where you stick the pins."

> I remember my father spent six hours trying to get a simple SCSI scanner to work.

I know it didn't take me very long (once I'd gotten a used SIIG SCSI card from a co-worker) to get my SCSI scanner up and running under Win95, and I still have both the card and the scanner.

mark
Compdoc

Chuck Munro wrote:
> My scepticism regarding SMART data continues. The flaky drive showed no errors, and a full test and full zero-write using the WD diagnostics revealed no errors either. If the drive is bad, there's no evidence that would cause WD to issue an RMA.

I've been having a rash of drive failures recently, and I have come to trust SMART. One thing's for sure: SMART is not implemented the same on all drives or controllers. Recently, one older Seagate drive showed no SMART capability in Linux using gnome-disk-utility, but I could read the SMART data from the drive in Windows with HD Tune. It isn't infallible, but SMART is certainly one tool you can use in the diagnosis. I wouldn't ignore Reallocated Sector counts or Current Pending Sector counts, for instance. Working for a customer this weekend, I replaced an older 60 GB WD drive that I'd known for months to have bad sectors, but the Reallocated Sector Count was still 0. After a scan for errors with HD Tune, the Current Pending Sector count showed 13, but the Reallocated Sector Count never grew. There is still a lot for me to learn, like the relationship between SMART within the drive and the controller's support of SMART. You would think they are independent of each other, but I wonder.
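For anyone who would rather watch those counters from Linux than from HD Tune, smartctl from the smartmontools package will show them; a rough sketch, with sdX as a placeholder:

    # dump the SMART attribute table and pull out the interesting counters
    smartctl -A /dev/sdX | grep -iE 'Reallocated_Sector|Current_Pending|Offline_Uncorrectable'
    # run a long self-test in the background, then read the results later
    smartctl -t long /dev/sdX
    smartctl -l selftest /dev/sdX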