MongoDB
4GiB per 10 Million Files - StorCycle's database should be sized based on the total number of objects scanned and migrated. The total size of the archive is irrelevant (i.e., 100 TB transferred and 10 TB transferred could produce the same database size if they contain the same number of files and folders).
Tens of billions of objects - StorCycle's database will grow based upon the total number of objects scanned and migrated. The actual size of each object does not impact the size of the StorCycle database. Performance testing has occurred with hundreds of billions of objects.
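The sizing rule above can be turned into a quick estimate. This is a minimal sketch of that arithmetic, assuming the stated rule of thumb of roughly 4 GiB of database space per 10 million objects; it is not StorCycle code.

```python
# Rough MongoDB sizing estimate, assuming ~4 GiB per 10 million
# objects as stated above. Only the object (file/folder) count
# matters; total data size does not.
GIB_PER_10M_OBJECTS = 4

def estimated_db_gib(object_count: int) -> float:
    """Return the estimated MongoDB size in GiB for a given object count."""
    return object_count / 10_000_000 * GIB_PER_10M_OBJECTS

# 100 TB of large files and 10 TB of small files can need the same
# database space if both contain 25 million objects:
print(estimated_db_gib(25_000_000))  # 10.0 (GiB)
```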
StorCycle scans between 6,000-12,000 files per second, on average.
StorCycle will scan files at a rate between 6,000 and 12,000 files per second.
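The quoted scan rate makes it easy to bound how long a scan should take. A small illustrative calculation, assuming the 6,000-12,000 files-per-second average above (actual rates vary by storage and network):

```python
# Estimate scan duration from the average scan rate quoted above.
def scan_time_hours(file_count: int, files_per_second: int) -> float:
    """Hours needed to scan file_count files at a given rate."""
    return file_count / files_per_second / 3600

# Scanning 100 million files:
print(scan_time_hours(100_000_000, 6_000))   # ~4.63 hours (slow end)
print(scan_time_hours(100_000_000, 12_000))  # ~2.31 hours (fast end)
```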
Migrate and restore performance will be based on the following factors:
StorCycle is officially tested and supported on: Windows Server 2019 and Red Hat Enterprise Linux v8.0
While StorCycle's features and performance can be modified with YAML settings, Spectra recommends consulting your Solution Architect first to obtain these settings.
Yes. GPFS Source Systems must be configured for each network disk. For example, if your GPFS server is "gpfs/data" and your network shares are mounted into it as "gpfs/data/networkshare1" and "gpfs/data/networkshare2", StorCycle Source locations must be configured on each network share (e.g., "gpfs/data/networkshare1"). If configured from the root "gpfs/data", StorCycle will be able to migrate data but will not be able to restore it back to the specific network share.
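The path mechanics behind this rule can be illustrated with plain path arithmetic. This is not StorCycle code; it only shows how the relative path recorded under each Source differs, using the hypothetical mounts from the example above:

```python
# Illustration: why a Source should point at the network share,
# not the GPFS root. Paths are the hypothetical mounts above.
import posixpath

file_path = "/gpfs/data/networkshare1/projects/report.docx"

# Source configured on the share: the relative path stays inside
# the share, so a restore resolves back onto networkshare1.
rel_from_share = posixpath.relpath(file_path, "/gpfs/data/networkshare1")
print(rel_from_share)  # projects/report.docx

# Source configured on the root: the share name is just another
# directory in the relative path, and restores resolve against the
# root mount rather than the specific network share.
rel_from_root = posixpath.relpath(file_path, "/gpfs/data")
print(rel_from_root)   # networkshare1/projects/report.docx
```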
Exclude filters apply only to migrations. StorCycle creates database records for every object it discovers during a scan; filters cannot be applied to scans.