Write endurance varies among solid state drives. SLC drives have higher endurance, but newer-generation MLC (and eMLC) drives are improving.
As an example, the MLC Intel 320 drives specify an endurance of 20GB/day of writes for five years. If you are doing small or medium-sized random reads and writes, this is sufficient. The Intel 710 series drives are the enterprise-class models and have higher endurance.
If you intend to write a full drive's worth of data per day (and every day, for a long time), this level of endurance would be insufficient. Large sequential operations (for example, very large map/reduces) can write far more than 20GB/day. Traditional hard drives are quite good at sequential I/O and thus may be better for that use case.
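To make the arithmetic concrete, here is a quick sketch (the 300GB drive size is an assumed example, not a spec):

```shell
# Total write budget implied by the 320-series spec: 20GB/day for five years
SPEC_GB_PER_DAY=20
TOTAL_GB=$((SPEC_GB_PER_DAY * 365 * 5))
echo "Spec write budget: ${TOTAL_GB} GB (~36.5 TB)"

# A workload that rewrites a full (assumed) 300GB drive daily would burn
# through that budget in well under a year:
DRIVE_GB=300
echo "Full-drive daily writes exhaust the budget in $((TOTAL_GB / DRIVE_GB)) days"
```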
Reserve Some Unpartitioned Space¶
Some users report good results when leaving 20% of their drives completely unpartitioned. In this situation the drive knows it can use that space as working space for wear leveling and garbage collection. Note that formatted-but-empty space may or may not be available to the drive, depending on TRIM support, which is often lacking.
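One way to do this on a blank drive (the device name below is a placeholder; adjust before running) is to create a partition covering only the first 80% of the disk:

```shell
# DANGER: destroys any existing data on /dev/sdX -- substitute your device
sudo parted /dev/sdX --script mklabel gpt
sudo parted /dev/sdX --script mkpart primary ext4 0% 80%
# The trailing 20% is never written by the file system, so the SSD
# controller can use it as working space.
```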
On some devices, smartctl -A will show you the Media_Wearout_Indicator attribute. On Intel drives, the normalized value typically starts at 100 and counts down as the rated write cycles are consumed.

$ sudo smartctl -A /dev/sda | grep Wearout
233 Media_Wearout_Indicator 0x0032   099   099   000    Old_age   Always       -       0
A paper in ACM Transactions on Storage (September 2010) listed the following results for measured 4KB peak random direct IO for some popular devices:
| Device | Read IOPS | Write IOPS |
Intel’s larger drives seem to have higher write IOPS than the smaller ones (up to 23,000 claimed for the 320 series).
Real-world results should be lower, but the numbers are still impressive.
Some manufacturers specify reliability stats indicating failure rates of approximately 0.6% per year. This is better than traditional drives (2% per year failure rate or higher), but still quite high, and thus mirroring will be important. (And of course manufacturer specs could be optimistic.)
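As an illustration of mirroring at the OS level (device names are placeholders, and replica sets are the usual MongoDB-level answer to redundancy), a two-way software mirror on Linux can be created with mdadm:

```shell
# DANGER: destroys existing data on the named devices -- adjust /dev/sdX, /dev/sdY
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX /dev/sdY
# Watch the initial sync progress:
cat /proc/mdstat
```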
Random Reads vs. Random Writes¶
Random access I/O is the sweet spot for SSDs. Historically, random reads on SSD drives have been much faster than random writes. That said, random writes are still an order of magnitude faster than on spinning disks.
Recently, new drives have been released with much higher random write performance. For example, the Intel 320 series, particularly the larger-capacity drives, has much higher random write performance than the older Intel X25 series drives.
PCI vs. SATA¶
SSDs are available both as PCI cards and SATA drives. PCI cards are oriented towards the high end of the market.
Some SATA SSD drives now support 6Gbps SATA transfer rates, yet at the time of this writing many controllers shipped with servers are 3Gbps. For random-I/O-oriented applications this is likely sufficient, but it is worth considering regardless.
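You can check the link speed a SATA drive has negotiated from smartctl's identify output (the device name is an example):

```shell
# Prints a line such as: "SATA Version is: SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)"
sudo smartctl -i /dev/sda | grep -i 'sata version'
```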
RAM vs. SSD¶
Even though SSDs are fast, RAM is still faster. Thus, for the highest possible performance, having enough RAM to contain the working set of the database is optimal. However, it is common to have a request rate that is easily met by the speed of random I/O with SSDs, and SSD cost per byte is lower than RAM (and SSDs are persistent too).
A system with less RAM and SSDs will likely outperform a system with more RAM and spinning disks. For example a system with SSD drives and 64GB RAM will often outperform a system with 128GB RAM and spinning disks. (Results will vary by use case of course.)
One helpful characteristic of SSDs is that they can facilitate a fast “preheat” of RAM after a hardware restart. On a restart, a system’s RAM file system cache must be repopulated. On a box with 64GB of RAM or more, this can take a considerable amount of time – for example, more than ten minutes at 100MB/sec, and much longer when the requests are random I/O to spinning disks.
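Repopulation time scales roughly linearly with cache size and read rate; a quick back-of-the-envelope sketch (the 400MB/sec SSD figure is an assumption, not a measurement):

```shell
# Time to repopulate a 64GB file system cache at different sequential read rates
RAM_MB=$((64 * 1024))
for RATE_MBPS in 100 400; do
  SECS=$((RAM_MB / RATE_MBPS))
  echo "${RATE_MBPS} MB/s: ${SECS}s (~$((SECS / 60)) min)"
done
```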
FlashCache is a write-back block cache for Linux, created by Facebook. Installation is a bit of work, as you have to build and install a kernel module. (Sep 2011: if you use this, please report results in the MongoDB forum, as it is new and everyone will be curious how well it works.)
One user reports good results with the noop I/O scheduler under certain configurations of their system. As always, caution is recommended with nonstandard configurations, as such configurations never receive as much testing.
mongoperf is a disk performance stress utility. It is not part of the MongoDB database but simply a disk exercising program. Test your SSD setup with mongoperf.
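mongoperf reads a JSON options document on stdin; a sketch of a random read/write test follows (the option values are examples to adjust – see mongoperf --help for the full list):

```shell
# 16 threads of random reads and writes against a 10GB test file
echo "{ nThreads: 16, fileSizeMB: 10000, r: true, w: true }" | mongoperf
```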
The random writes it measures are a worst-case scenario; in many cases MongoDB can do writes that are much larger.