CloudBD disks require access to a "storage remote" for storing all disk data blocks and metadata. This page will guide you through the creation of a Google storage remote that uses Google Cloud Storage (GCS) as well as any extra steps for improved security and performance.
Each Google storage remote requires its own GCS bucket for storing disks. A Google storage remote belongs to the same Google region as its GCS bucket (chosen during bucket creation). GCS buckets incur additional costs and higher latency when accessed by services outside of their region, so Compute Engine instances that use CloudBD disks must be located in the same region as their CloudBD storage remote. To create CloudBD disks in multiple Google regions, create a separate Google storage remote (and GCS bucket) for each region.
A Google storage remote can store any number of CloudBD disks so a single storage remote per Google region may be sufficient.
Quick Start Guide
In this Quick Start guide we will create a remote for a Google region that is secured by a new Google service account.
Select a project or create a new project that will contain the remote storage bucket and the service account.
Create Service Account
The CloudBD disk driver requires a Google service account to access the remote storage bucket. Any service account with admin access to the GCS bucket will work, but for best security we recommend creating a new service account for CloudBD disks and restricting its access to only the CloudBD buckets.
When creating a service account, make sure you download or save the private key file in JSON format. The private key is only available at key creation time and is required to authenticate to your GCS storage remote.
- Go to IAM & Admin -> Service Accounts -> Create Service Account
- Enter a name for your CloudBD service account
- Leave the Project role empty (no permissions)
- Select Furnish a new private key -> Key type JSON
- Save and Download the private key json file
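The steps above can also be performed from the command line with the gcloud CLI. This is a sketch; the service account name, key filename, and `<project-id>` are placeholders you should replace with your own values:

```shell
# Create a dedicated service account with no project-level roles
# ("cloudbd-disks" is an example name)
gcloud iam service-accounts create cloudbd-disks \
    --display-name "CloudBD disks"

# Create and download the JSON private key for the new account;
# replace <project-id> with your Google Cloud project ID
gcloud iam service-accounts keys create cloudbd-key.json \
    --iam-account cloudbd-disks@<project-id>.iam.gserviceaccount.com
```

Keep the downloaded key file safe; a service account key cannot be downloaded again after creation.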
Create GCS Bucket
All disk data and metadata are stored in a GCS bucket. When creating a GCS bucket, make sure to use the same region as the Compute Engine instances that will use the CloudBD disks.
Note: CloudBD only supports buckets with Regional Storage Class.
- Go to Storage -> Create Bucket
- Enter a name for your bucket
- Select Storage Class: Regional
- Select the location
- Select Create
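Equivalently, the bucket can be created with the gsutil CLI. A sketch, where the bucket name and region are example values:

```shell
# Create a bucket with Regional storage class in the same region as
# your Compute Engine instances ("us-east1" and the name are examples)
gsutil mb -c regional -l us-east1 gs://my-cloudbd-bucket
```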
Grant Bucket Access
The service account created above needs Storage Admin access to the GCS bucket created for the disk data. This access can be granted to the service account on a per-bucket basis.
- Go to Storage -> Browser
- Select the '⋮' icon on the far right of your GCS bucket
- Select Edit Bucket Permissions
- In Add Members enter the client email of your service account created above
- Select the Storage Admin role
- Click the Add button
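This per-bucket grant can also be sketched with the gsutil CLI, assuming the example service account and bucket names used above:

```shell
# Grant the service account the Storage Admin role on this bucket only
gsutil iam ch \
    serviceAccount:cloudbd-disks@<project-id>.iam.gserviceaccount.com:roles/storage.admin \
    gs://my-cloudbd-bucket
```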
Deploy Config File
Now that your Google storage remote has been created, create a storage remote config file and deploy it to the servers that will manage and use CloudBD disks. Storage remote config files are placed in the /etc/cloudbd/remotes.d/ directory, must end in ".conf", and use the basic INI config format of key = value. Below are the required fields for a Google storage remote.
Declares the type of remote connection. For Google Cloud Storage only OAuth 2.0 is supported.
type = gcs_oauth_v2
The name of the bucket set up as a CloudBD remote.
bucket = <Your bucket name>
The client email of a Google service account with the necessary read and write permissions to the remote storage bucket.
client_email = <Your Google service account client email>
The private key of the Google service account with the given client_email.
private_key = <Your Google service account private key>
Whether to use http or https when communicating with GCS. We recommend the 'http' protocol when attaching disks to Google virtual machine instances and the 'https' protocol when attaching disks to servers outside of Google's cloud environment. The protocol field is optional and defaults to 'http' if not set. The http protocol is less CPU intensive but requires trusting the local network; the https protocol is more secure because it encrypts all network data, but adds CPU overhead when reading and writing CloudBD disks.
protocol = <http | https>
Example GCS Remote Config File
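A minimal sketch, assuming the placeholder bucket and service account names used in this guide; the private_key value comes from the JSON key file you downloaded at service account creation:

```ini
# /etc/cloudbd/remotes.d/my-remote.conf
type = gcs_oauth_v2
bucket = my-cloudbd-bucket
client_email = cloudbd-disks@<project-id>.iam.gserviceaccount.com
private_key = <Your Google service account private key>
protocol = http
```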