According to the TechNet article “Software boundaries and limits for SharePoint 2013”, each Microsoft SQL Server content database should not exceed 200 gigabytes (GB) in size. Even when some or most of the data has been offloaded to other storage locations using External BLOB Storage or Microsoft SharePoint Remote BLOB Storage (EBS or RBS), “the total volume of remote BLOB [binary large object] storage and metadata in the content database must not exceed this limit.”
Customers frequently ask what purpose DocAve Storage Manager serves, then, if they are still “strongly recommended” to stay within this limit.
Storage Manager externalizes unstructured SharePoint BLOB data and stores it in network file shares and a variety of other configurable devices and locations, with the aim of reducing the amount of storage space needed on costly SQL Server machines while keeping the data fully available to end users. However, this limit would suggest that 200 GB must still be set aside for every content database even if 180 GB of that data, for instance, is no longer stored in SQL. How does Storage Manager reduce SQL storage usage if that is the case?
You may also notice that the content database limit can be raised to 4 terabytes (TB), provided the storage disks can deliver between 0.25 and 2 IOPS (input/output operations per second) per gigabyte. If externalized content must be housed on such high-performance disks – and only a relatively small amount of data can be offloaded to those disks before the minimum IOPS requirement is reached – then how does this help customers save significantly on storage costs?
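To make the per-gigabyte scaling guidance concrete, here is a small illustrative sketch. The 0.25 and 2 IOPS-per-GB figures come from the TechNet article; the function name and example database size are my own:

```python
def required_iops(db_size_gb: float) -> tuple[float, float]:
    """Return (minimum, recommended) IOPS for a content database,
    per TechNet's guidance of 0.25 IOPS/GB minimum and 2 IOPS/GB
    recommended when growing a database toward the 4 TB limit."""
    return db_size_gb * 0.25, db_size_gb * 2.0

# A content database grown to the full 4 TB (4096 GB):
minimum, recommended = required_iops(4096)
print(minimum)      # 1024.0 IOPS minimum
print(recommended)  # 8192.0 IOPS recommended
```

Even at the minimum, a 4 TB database calls for storage sustaining over a thousand IOPS, which is why the question of disk cost arises.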
Keep in mind that these figures from Microsoft apply to their own FILESTREAM provider, and are not necessarily relevant to the AvePoint DocAve RBS provider. This is because all FILESTREAM operations occur on the same server as SQL Server, while DocAve is capable of shifting them to other servers. Where DocAve is concerned, these are more suggestions than hard requirements.
“Plan for RBS in SharePoint 2013”, which also applies to SharePoint 2010, is another TechNet article that administrators may find useful when evaluating whether to use remote BLOB storage. In short, there are two primary use cases for externalizing data:
- Capture large files during data migrations or routine end-user file uploads
- Archive idle content to lower, less expensive tiers of storage
The first scenario applies to active content, for which organizations should indeed conform as closely as possible to the low-latency, high-IOPS, and SQL database size requirements outlined in the aforementioned TechNet articles. One advantage here is that you can stop your database from growing much larger, preventing database bloat. As SQL Server databases become bloated, they require more storage space and handle program transactions less efficiently. Another is that you can even improve read-write access speeds on those large files, because there are typically observable performance gains when files are moved out of SQL Server and onto external high-tier disks. And, of course, you can expand your content databases several times over 200 GB.
Our white paper, Optimize SharePoint Storage with BLOB Externalization, goes into greater detail about the important considerations when you’re looking to maximize BLOB performance in your SharePoint environment. The general guideline is: “Our testing has shown that BLOBs larger than 1 MB generally perform better when externalized – assuming the BLOB store itself performs well – whereas very small files smaller than 256 KB generally perform better when kept in the content database.”
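That rule of thumb can be expressed as a simple decision function. This sketch is illustrative only – the 256 KB and 1 MB thresholds are the white paper’s, but the function itself is hypothetical, and real externalization rules are configured in Storage Manager rather than written by hand:

```python
from typing import Optional

KEEP_IN_DB_BELOW = 256 * 1024    # 256 KB: small files perform better in SQL
EXTERNALIZE_ABOVE = 1024 * 1024  # 1 MB: large BLOBs perform better externalized

def should_externalize(blob_size_bytes: int) -> Optional[bool]:
    """Apply the white paper's size guideline.
    Returns True (externalize), False (keep in the content database),
    or None for the 256 KB - 1 MB range, where testing against your
    own workload is the only reliable answer."""
    if blob_size_bytes < KEEP_IN_DB_BELOW:
        return False
    if blob_size_bytes > EXTERNALIZE_ABOVE:
        return True
    return None

print(should_externalize(50 * 1024))     # False: keep in SQL
print(should_externalize(10 * 1024**2))  # True: externalize
```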
However, the 2nd scenario is definitely the one which introduces greater financial savings. With regards to idle content for example old versions or documents that haven’t been utilized inside a lengthy time, the IOPS requirement becomes significantly less important. Actually, even Microsoft’s 4 TB limit per content database no more applies, because there’s really no explicit limit for “document archive” scenarios. Which means that a lot more content could be externalized, as lengthy because the files are now being read and written infrequently. You need to therefore have the ability to store enormous levels of BLOB data on cheaper storage, potentially helping you save thousands or perhaps huge amount of money each year.
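The scale of those savings is easy to sanity-check with back-of-the-envelope arithmetic. The per-GB prices below are purely hypothetical placeholders – substitute your own storage quotes:

```python
# Hypothetical per-GB monthly prices; substitute your own quotes.
TIER1_SQL_PER_GB = 1.50   # premium disk backing SQL Server
ARCHIVE_PER_GB = 0.10     # commodity file share / archive tier

def annual_savings(archived_gb: float) -> float:
    """Yearly savings from moving idle BLOBs off tier-1 SQL storage."""
    return archived_gb * (TIER1_SQL_PER_GB - ARCHIVE_PER_GB) * 12

# Archiving 50 TB of idle content:
print(annual_savings(50 * 1024))  # 860160.0 -> roughly $860K per year
```

Even with far more conservative price gaps, the savings grow linearly with the volume of idle content you can push to the cheaper tier.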
In both scenarios, once BLOBs have been offloaded from SQL Server to remote storage locations, the administrator can then shrink the content databases so they return close to their size prior to that content being uploaded to SharePoint. As a further benefit, this effectively reduces backup time, storage costs, and IOPS requirements.
SQL Server is needed for many SharePoint activities beyond the reading and writing of BLOB data. Once BLOB-related I/O has been removed from the equation, SQL Server resources can be reallocated to other tasks, resulting in improved responsiveness for those other activities. Meanwhile, you can postpone the need to purchase new hardware to support additional SQL servers.
Note: If you’re running SharePoint 2013 and wondering whether RBS is still helpful or compatible in the age of shredded storage, please refer to John Hodges’ earlier blog post, “The Case for Remote BLOB Storage in Microsoft SharePoint 2013”.
Additional Benefits Provided by DocAve
As mentioned in the previous section, the AvePoint DocAve RBS provider is capable of overcoming some limitations inherent in Microsoft’s FILESTREAM provider. For example, “Plan for RBS in SharePoint 2013” mentions that FILESTREAM doesn’t support encrypting or compressing BLOB data, but DocAve Storage Manager has long offered such capabilities.
Understandably, once RBS has been enabled, your SQL environment becomes harder to manage. Externalized content won’t be counted toward the total size in SQL Server once the databases have been shrunk, and it thus becomes more difficult to identify which content databases are at or approaching the 200 GB best-practice limit. In DocAve 6 Service Pack (SP) 3, DocAve Storage Manager introduced a new Storage Report covering the segmentation of storage in your SharePoint environment (data residing in SQL databases vs. file servers), database consumption, each file’s date of last access, the externalization path of each BLOB, and more. The Storage Report gives you a clear, comprehensive overview of just how much SharePoint data you have and where it all resides, simplifying the task of storage administration.
DocAve Report Center provides a few more storage-related reports. You can view trending data on storage consumption over time for multiple site collections in a single graph, predict how much data a site collection will contain in the future, evaluate how many non-externalized items exist within particular size ranges, and more.
Once you have stubs of documents in SQL Server and the actual BLOBs themselves in disparate external storage locations, your next requirement is backing up all of this data to meet service level agreements (SLAs).
The first TechNet article referenced above warns that once you build an environment with content databases exceeding 200 GB each (i.e., once you have enabled EBS or RBS), “[r]equirements for backup and restore might not be met by the native SharePoint Server 2013 backup.” Fortunately, DocAve Backup and Restore has your back here. Not only can you back up all of your stub databases and oversized content databases, but you can also include the BLOB stores themselves.
Remote BLOB storage will almost always increase the complexity of your SharePoint environment, but when done effectively according to the guidelines outlined in this blog post, it can lower storage infrastructure costs appreciably and even improve performance under certain conditions. With DocAve’s numerous options for end-to-end SharePoint administration, most of the initial hurdles and ensuing concerns around implementing storage optimization have been simplified and streamlined.