Question 39 - Professional Cloud Architect discussion
The application reliability team at your company has added a debug feature to their backend service to send all server events to Google Cloud Storage for eventual analysis. The event records are at least 50 KB and at most 15 MB, and are expected to peak at 3,000 events per second. You want to minimize data loss.
Which process should you implement?
A.
- Append metadata to file body
- Compress individual files
- Name files with serverName-Timestamp
- Create a new bucket if the current bucket is older than 1 hour and save individual files to the new bucket; otherwise, save files to the existing bucket
B.
- Batch every 10,000 events with a single manifest file for metadata
- Compress event files and manifest file into a single archive file
- Name files using serverName-EventSequence
- Create a new bucket if the current bucket is older than 1 day and save the single archive file to the new bucket; otherwise, save the single archive file to the existing bucket
C.
- Compress individual files
- Name files with serverName-EventSequence
- Save files to one bucket
- Set custom metadata headers for each object after saving
D.
- Append metadata to file body
- Compress individual files
- Name files with a random prefix pattern
- Save files to one bucket
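The "random prefix pattern" in option D reflects a documented Cloud Storage best practice: sequential object names (timestamps, event sequence numbers) concentrate writes on a narrow key range, while a randomized prefix spreads a high write rate across the keyspace. A minimal sketch of such a naming scheme, using only the Python standard library (the function name and 6-character prefix length are illustrative, not part of the question):

```python
import hashlib

def object_name(server_name: str, event_sequence: int) -> str:
    """Build an object name with a pseudo-random prefix.

    Hashing the logical name and prepending a few hex characters
    spreads sequential writes across the keyspace instead of
    hotspotting one range, as purely sequential names would.
    """
    logical = f"{server_name}-{event_sequence}"
    prefix = hashlib.md5(logical.encode()).hexdigest()[:6]  # pseudo-random prefix
    return f"{prefix}-{logical}"

# Consecutive events from the same server get widely scattered prefixes:
print(object_name("web-01", 1001))
print(object_name("web-01", 1002))
```

The hash keeps the name deterministic and reconstructable from the server name and sequence number, while still distributing the write load.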