CPU
- See here for a full comparison of machine types: https://cloud.google.com/compute/docs/machine-types
- Intel Xeon and AMD EPYC platforms supported
- Predefined and custom machine types
- Supports Windows and Linux VMs
- Resources (CPU, RAM, DISK) can be customized for both types
- General purpose: E2, N2, N2D, N1
- Memory-Optimized: M2, M1
- Compute-Optimized: C2
- Accelerator-Optimized: A2
- 1 vCPU on GCP = 1 hardware thread (not a physical core)
- 2 Gbps of network bandwidth per vCPU, up to a maximum of 10 Gbps
- Maximum bandwidth of 32 Gbps for VMs with >= 16 vCPUs
- 100Gbps available for machines with T4 or V100 GPUs attached
- Full list of CPU platforms available here: https://cloud.google.com/compute/docs/cpu-platforms
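As a rough sketch of the predefined vs. custom machine type options above, the gcloud commands below create one VM of each kind; the instance names, zone and sizes are placeholder values, not anything prescribed by GCP.

```bash
# Predefined machine type: e2-standard-4 (4 vCPUs, 16 GB RAM)
gcloud compute instances create demo-predefined \
    --zone=us-central1-a \
    --machine-type=e2-standard-4

# Custom machine type: choose vCPU count and memory explicitly
# (defaults to the N1 family unless a custom VM type is specified)
gcloud compute instances create demo-custom \
    --zone=us-central1-a \
    --custom-cpu=6 \
    --custom-memory=20GB
```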
Storage
- Full details on the available storage options are here: https://cloud.google.com/compute/docs/disks
- Disk types:
- Standard: Mechanical disks
- SSD: Network-attached solid-state disks
- Local SSD: Locally attached disks. Data is not persistent
- Up to 3TB local SSD storage can be configured with 8 x 375GB disks
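To check which of the disk types above are offered in a particular zone, a query along these lines can be used (the zone is a placeholder):

```bash
# List available disk types (pd-standard, pd-ssd, local-ssd, ...) in one zone
gcloud compute disk-types list --filter="zone:us-central1-a"
```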
Networking
- VMs can have 1 internal and 1 external IP address (the VM itself is not aware of the external address)
- Firewall rules are applied to VMs based on Tag or network
- 2 types of load balancing are provided:
- Regional HTTPS
- Network LB
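As a sketch of tag-based firewall rules, the commands below create a rule that allows HTTP only to VMs carrying a given network tag and then apply that tag to an existing instance; the rule name, tag, network, instance name and zone are all placeholders.

```bash
# Allow inbound TCP 80 from anywhere, but only to VMs tagged "web"
gcloud compute firewall-rules create allow-http-web \
    --network=default \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:80 \
    --source-ranges=0.0.0.0/0 \
    --target-tags=web

# Tag an existing VM so the rule applies to it
gcloud compute instances add-tags my-vm \
    --zone=us-central1-a \
    --tags=web
```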
Images
- An image is applied to a VM and contains the following items:
- Boot loader
- Operating System
- File system
- Software
- Customizations
- GCP supports both public and custom images, Linux and Windows
- Premium images are charged per second (rather than minute)
- Custom images can be created and imported
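A minimal sketch of creating a custom image from an existing disk and launching a new VM from it; the disk, image, instance and project names plus the zone are placeholders.

```bash
# Create a custom image from an existing persistent disk
gcloud compute images create my-custom-image \
    --source-disk=my-boot-disk \
    --source-disk-zone=us-central1-a

# Launch a new VM from the custom image
gcloud compute instances create my-vm-from-image \
    --zone=us-central1-a \
    --image=my-custom-image \
    --image-project=my-project
```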
Disk Options
The following disk options are available:
Feature | Support |
---|---|
Bootable | Persistent disks only |
Data redundancy | Persistent disks only |
Snapshots | Persistent disks only |
Encryption at rest | Persistent disks and local SSD |
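For example, snapshots (available for persistent disks only, per the table above) can be taken with something like the following; the disk name, snapshot name and zone are placeholders.

```bash
# Snapshot a persistent disk (can be done while it is attached to a running VM)
gcloud compute disks snapshot my-data-disk \
    --zone=us-central1-a \
    --snapshot-names=my-data-disk-snap-001
```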
Persistent Disks
- All VMs come with a persistent boot disk
- Disable the VM property "Delete boot disk when instance is deleted" to prevent the boot disk from being removed when a VM terminates
- Network based block storage
- Supports snapshots
- Can be dynamically resized, even on running VMs
- Can be mounted to multiple VMs in read-only mode
- Can be Zonal or Regional
- Regional disks are synchronously replicated across two zones in the same region
- Data is encrypted at rest
- Choice of key management: google managed or customer managed
- Up to 128 persistent disks can be attached to a VM, or 16 disks for shared-core VMs (f1-micro and g1-small)
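Because persistent disks can be resized even while attached to a running VM, a resize looks roughly like this; the disk name, zone, new size and device path are placeholders, and the filesystem still has to be grown inside the guest afterwards.

```bash
# Grow the persistent disk to 500 GB while the VM keeps running
gcloud compute disks resize my-data-disk \
    --zone=us-central1-a \
    --size=500GB

# Inside the Linux guest, grow an ext4 filesystem to use the new space
# (assumes the disk is /dev/sdb and is used without a partition table)
sudo resize2fs /dev/sdb
```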
Local SSD
- Directly attached storage providing very high IOPs
- Up to 8 x 375GB disks can be attached to a VM
- Disks are ephemeral: data is lost when the VM is stopped or terminated, but persists across resets (soft reboots)
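A sketch of attaching the full 8 x 375 GB of local SSD at creation time (local SSDs are typically added when the instance is created); the instance name, zone and machine type are placeholders.

```bash
# Each --local-ssd flag adds one 375 GB ephemeral disk (8 x 375 GB = 3 TB)
gcloud compute instances create my-localssd-vm \
    --zone=us-central1-a \
    --machine-type=n1-standard-8 \
    --local-ssd=interface=NVME \
    --local-ssd=interface=NVME \
    --local-ssd=interface=NVME \
    --local-ssd=interface=NVME \
    --local-ssd=interface=NVME \
    --local-ssd=interface=NVME \
    --local-ssd=interface=NVME \
    --local-ssd=interface=NVME
```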
RAM Disk
- Faster than local disk but not as fast as memory
- Volatile – data is erased if VM is stopped or restarted
- Can be used to provide fast access at near-memory speeds using `tmpfs` (see the example below)
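A minimal tmpfs RAM disk example inside a Linux guest; the mount point and size are arbitrary placeholders.

```bash
# Create a mount point and mount a 4 GB tmpfs RAM disk on it
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=4G tmpfs /mnt/ramdisk

# Anything written here lives in RAM and is lost when the VM stops or restarts
cp working-set.dat /mnt/ramdisk/
```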
VM Metadata
- VM configuration data is stored on a metadata server
- Useful for retrieving instance data during startup and shutdown
- Default metadata keys are present for every instance, making code reusable
- Metadata is stored in `key:value` pairs
- Custom metadata can also be defined
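From inside a VM, metadata is read over HTTP from the metadata server. The examples below use well-known default keys plus a hypothetical custom key named my-key.

```bash
# Default metadata keys are present on every instance (name, zone, etc.)
curl -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/name"

curl -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/zone"

# Custom metadata values appear under instance/attributes/
curl -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/attributes/my-key"
```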
Linux VM Access
- All VMs can be accessed from the Cloud console: https://console.cloud.google.com
- Linux VMs can be accessed in one of the following ways:
- OS login (SSH)
- SSH Keys from metadata
- Temporary access
OS Login (SSH)
- Allows Compute Engine IAM roles to be used for login and thus avoid SSH key configuration
- OS Login can be configured at instance, project or organization level by adding the key “enable-oslogin = TRUE” to metadata
- Instance Level: Configure custom metadata during or after creation by editing the instance in the GCP console
- Project Level: Configure project-level metadata in GCP console
- Organization Level: Configure metadata in the IAM Admin console
- When accessing from the GCP console or the gcloud CLI, SSH keys are automatically created by Compute Engine
- Access from a 3rd party client such as PuTTY requires an SSH key to be generated:
- Ensure the username is added to the end of the key and it is of the format “ssh-rsa <key> <username>” on a single line with no line breaks (see Managing SSH Keys)
- If using a gmail account, replace all dots and @ signs with an “_” e.g. jon.doe@gmail.com becomes jon_doe_gmail_com
- Public keys must be added to the user account with the gcloud CLI or API methods
- If connecting with PuTTY, ensure the private key is loaded into the profile
- Links a Linux user account to a Google identity
- Access can be controlled at an instance or project level
- Fine-grained permissions can be assigned at the Google identity level, e.g. sudo command privileges
- VM permissions are automatically updated with changes in line with IAM
- Linux account IDs can be synchronized with on-premises AD and LDAP
- Access is granted through SSH keys associated with the Linux user account
- 2 factor authentication is also supported
- Instances must have the guest environment installed
- One can think of the guest environment as loosely similar to VMware Tools deployed on ESXi VMs
- Automatically deployed when you create a standard instance
- Must be manually deployed if using a custom image
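A rough example of enabling OS Login and registering a public key with gcloud; the instance name, zone and key path are placeholders.

```bash
# Enable OS Login for a single instance...
gcloud compute instances add-metadata my-vm \
    --zone=us-central1-a \
    --metadata=enable-oslogin=TRUE

# ...or for every instance in the project
gcloud compute project-info add-metadata \
    --metadata=enable-oslogin=TRUE

# Associate a public SSH key with your Google identity (e.g. for PuTTY access)
gcloud compute os-login ssh-keys add \
    --key-file ~/.ssh/id_rsa.pub
```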
SSH Keys from metadata
- Manually manipulating instance metadata is an advanced approach and is not without risk if misunderstood or incorrectly implemented
- Access is controlled by creating SSH keys and editing public SSH key metadata in GCP
- The following instance or project level permissions are required to manage metadata:
- Instance Level: `compute.instances.setMetadata` and `iam.serviceAccounts.actAs`
- Project Level: `compute.projects.setCommonInstanceMetadata` and `iam.serviceAccounts.actAs`
- SSH Keys are applied to instance or project level metadata to permit access to instances when a client presents an authorized public key. In order to do that, a client must have the corresponding private key stored on the client device
- A key point to note is that access with this method is not restricted by GCP IAM roles – i.e. a client does not have to be a project member to access an instance – just the key pair stored in metadata
- Format the public key as `ssh-rsa [KEY_VALUE] [USERNAME]` in a single line without line breaks
- Add the public key at the appropriate level (see the example after this list):
- Project-wide metadata to permit access to all Linux instances in a project
- Instance-level metadata to permit access to a specific Linux instance. Edit the instance and add the public key to metadata.
- Notice the “Block project-wide SSH keys” checkbox. This option blocks any project-level keys added to project metadata i.e. only specific keys are permitted access to this instance
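A sketch of the metadata-based approach; the username, key path, instance name and zone are placeholders. Note that add-metadata overwrites the existing value of a key, so any entries already stored under ssh-keys should be included in the file as well.

```bash
# Generate a key pair; the comment (-C) should be the Linux username
ssh-keygen -t rsa -f ~/.ssh/gcp-key -C jdoe

# Build the metadata value in the format "USERNAME:ssh-rsa <key> <username>"
echo "jdoe:$(cat ~/.ssh/gcp-key.pub)" > /tmp/ssh-keys.txt

# Instance-level: permit access to one VM only
gcloud compute instances add-metadata my-vm \
    --zone=us-central1-a \
    --metadata-from-file ssh-keys=/tmp/ssh-keys.txt

# Project-level: permit access to all Linux VMs in the project
gcloud compute project-info add-metadata \
    --metadata-from-file ssh-keys=/tmp/ssh-keys.txt
```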
Temporary Access
- Temporary access is simply a case of applying an IAM Policy to a resource to give access to it
- IAM policies simply bind role(s) to member(s) such as users or service accounts
- Follow the principle of least privilege i.e. grant access at the lowest level for a given resource rather than project level or higher
- Supported resources:
- Resources not listed above must be managed at a project, folder or organizational level
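As an illustration, a role can be bound directly on a single instance (rather than the whole project) and removed again when no longer needed; the instance, zone, member and role are placeholders.

```bash
# Grant a user OS Login on one specific instance only
gcloud compute instances add-iam-policy-binding my-vm \
    --zone=us-central1-a \
    --member="user:jane@example.com" \
    --role="roles/compute.osLogin"

# Revoke the access once it is no longer required
gcloud compute instances remove-iam-policy-binding my-vm \
    --zone=us-central1-a \
    --member="user:jane@example.com" \
    --role="roles/compute.osLogin"
```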
Windows VM Access
- Can be accessed through RDP or PowerShell
- Default firewall rules permit RDP on TCP 3389
- Direct RDP access over a public IP or VPN
- Use Identity Aware Proxy (IAP) if instance does not have a public IP
- If IAP cannot be used, then Chrome Remote Desktop is also an option
- The following diagram illustrates the various options
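A rough sketch of two common access paths: generating a Windows password for RDP and, for a VM with no public IP, tunnelling RDP through IAP; the instance name, zone and user are placeholders.

```bash
# Generate/reset the local Windows password for an RDP user
gcloud compute reset-windows-password my-win-vm \
    --zone=us-central1-a \
    --user=jane

# No public IP? Tunnel RDP (TCP 3389) through Identity-Aware Proxy,
# then point the RDP client at localhost:3389
gcloud compute start-iap-tunnel my-win-vm 3389 \
    --zone=us-central1-a \
    --local-host-port=localhost:3389
```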
VM Lifecycle
A VM has numerous states it can pass through in its lifecycle as shown below
- Provisioning: Resources such as CPU and RAM are reserved for the instance
- Staging: The network is configured, the system image is applied and the instance boots up
- Running: Startup scripts are applied and SSH/RDP access is enabled. If the system is reset, it remains in the Running state even though the guest OS is rebooting
- Terminated:
- Enters this state when shutdown
- No charges for the instance whilst in this state (attached persistent disks and static IPs are still billed)
- Can either be restarted or deleted from this state
- Suspended: Places the system in a paused state
- Terminated: When a preemptible instance is stopped or preempted, a 30s timeout applies to allow for a graceful shutdown; otherwise, a 90s timeout applies
- An instance's Availability Policy determines how a VM behaves during a maintenance event, e.g. a host reboot. The default behavior is to live-migrate (a la “vmotion”) the instance to another host, but it can also be configured to be stopped instead (see the gcloud example after this list)
- On host maintenance = Migrate or Terminate VM Instance
- Automatic restart = On/Off and determines whether a VM is automatically restarted should it stop (e.g. after a crash or host maintenance event)
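The availability policy settings above map to gcloud roughly as follows; the instance name and zone are placeholders.

```bash
# Default behaviour: live-migrate on host maintenance and restart automatically
gcloud compute instances set-scheduling my-vm \
    --zone=us-central1-a \
    --maintenance-policy=MIGRATE \
    --restart-on-failure

# Alternative: terminate on host maintenance and do not restart automatically
gcloud compute instances set-scheduling my-vm \
    --zone=us-central1-a \
    --maintenance-policy=TERMINATE \
    --no-restart-on-failure
```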