For a big data workload that requires shared access and NFS-based connectivity, which storage service should be utilized?


The most suitable storage service for a big data workload that demands shared access and NFS-based connectivity is OCI File Storage. This service is specifically designed for workloads that rely on a shared file system.

File Storage provides a managed, elastic file system with NFS support, allowing multiple compute instances to mount and access the same files concurrently. This is crucial for big data applications, where parallel processing across many nodes is common. It scales automatically while maintaining the performance and consistency needed by applications that read and write shared data simultaneously.
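As a minimal sketch, each compute instance would mount the shared export over NFS before the workload runs. The mount target IP (`10.0.0.5`) and export path (`/bigdata-fs`) below are hypothetical placeholders; in practice you would use your file system's mount target private IP and export path from the OCI console:

```shell
#!/bin/sh
# Hypothetical values -- replace with your mount target's private IP,
# your export path, and your preferred local mount point.
MOUNT_TARGET_IP="10.0.0.5"
EXPORT_PATH="/bigdata-fs"
MOUNT_POINT="/mnt/bigdata"

# Commands each instance would run (printed here rather than executed):
echo "sudo mkdir -p ${MOUNT_POINT}"
echo "sudo mount -t nfs -o nfsvers=3 ${MOUNT_TARGET_IP}:${EXPORT_PATH} ${MOUNT_POINT}"
```

Because every instance mounts the same export, files written by one node are immediately visible to the others, which is what enables the shared access this question asks about.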

The other storage options, while effective in their own contexts, do not meet the NFS-connectivity requirement as well:

- Block Volume attaches to individual compute instances and delivers high performance, but it does not inherently support shared access the way File Storage does.
- Archive Storage is designed for cost-effective, long-term retention of infrequently accessed data and lacks the low-latency performance big data workloads need.
- Object Storage is scalable and ideal for unstructured data, but it exposes an HTTP API rather than a traditional file system interface or NFS, making it less suited to shared-access big data scenarios.

Thus, for workloads requiring shared access with NFS, File Storage is the optimal choice.
