
Welcome to the IP4G Documentation

Documentation and usage guides

These pages show you how to access and get started with IBM Power for Google (IP4G).

If you are new to IP4G, start with Getting Started.

The How To guides will take you through specific actions.

The Videos section has videos that may help you get going.

The Additional Information section has other helpful information, including an FAQ.

1 - Overview

IP4G is a platform for hosting IBM Power Workloads on Google Cloud (GCP)

IBM® Power for Google Cloud is an infrastructure as a service solution from IBM that you can use to deploy, manage, and consume PowerVM based virtual machines (LPARs) that are connected to the Google Cloud Platform. Virtual machine (VM) management is provided by a Google aligned experience that offers APIs, command line, and web-based console options.

The IBM Power for Google Cloud service is designed to deliver a public cloud-based experience with the same infrastructure capabilities that you run on premises. You can quickly deploy Power Systems VMs to meet your specific business needs. You can create a hybrid cloud environment that combines IBM Power benefits and services on the Google Cloud Platform for a hybrid cloud solution.

You can use the IBM Power Systems for Google Cloud service to host virtual machines that are running the AIX, IBM i, or Linux operating systems.

This service uses a capacity-based subscription model with monthly pricing. The subscription is based on a cloud instance plan, which is a collection of compute, memory, storage, and network resources.

1.1 - IP4G Service Features

Service Features of IBM Power for Google Cloud

Service features

The following are some of the key features for the IBM Power for Google Cloud service.

Google integrated billing

Cloud plans are subscribed to from the Google Cloud Platform (GCP) Marketplace and included in your monthly Google bill. The pricing for the Google cloud plan is based on a monthly subscription. The Google cloud plan does not have a term commitment, and you can cancel at any time. Billing is pro-rated for partial monthly usage.

Internal Google network access to GCP resources and services

Cloud instances are networked into a target GCP project by using Google Virtual Private Cloud (VPC) Peering technology. This VPC Peering enables Virtual Machines (VMs) on IBM Power System servers to obtain direct private access to GCP resources such as compute, cloud storage, and other services over the internal global Google network. You can use this connectivity for solutions spanning IBM Power Systems infrastructure and GCP resources. For more information, see Google VPC Peering.

Network design with a secure by default access model

VMs created on IBM Power Systems infrastructure are assigned IP addresses on the cloud instance private network within the associated GCP project. By default, IP addresses are internal to the project and only accessible from resources within the project. You have full control over network access to IBM Power Systems VMs.

You can control how the VMs are made accessible beyond their GCP project by using solutions such as front-end GCP-based application servers, VPNs, NAT gateways, jump servers for SSH access, or Google Direct Interconnect solutions to the data center.

OpenAPI compliant APIs, command line, and web console

You can use OpenAPI compliant APIs, the command line, or the web console to manage the lifecycle of VMs (LPARs) from creation to deletion. Operations are also available to manage VM images, data volumes, and networks.

The command line tool is called pcloud. To view the documentation for the pcloud command, run the pcloud docs command. Documentation links for the API and web console are provided as part of the management web console.
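For example, after the pcloud CLI is configured, typical lifecycle operations look like the following (the instance name is illustrative; these commands appear throughout the how-to guides below):

pcloud compute instances list
pcloud compute instances describe my-vm
pcloud compute images list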

AIX license and support entitlement

The IBM Power for Google Cloud service includes a license to run the AIX operating system with support entitlement for supported AIX versions. This service offers a few stock AIX images that you can deploy. You can also bring your own custom AIX images based on OVA exports from IBM Cloud PowerVC Manager. You can also use AIX mksysb images along with a NIM server or AIX alt disk installation to create customized LPARs within cloud instances.

Enterprise IBM Power Systems infrastructure compatible with on-premises deployments and workloads

The IBM Power for Google Cloud service is composed of IBM PowerVM based Power Systems servers. The infrastructure includes the following attributes:

  • IBM Power 9 or Power 10 System Types (Varies by region)
  • 16-gigabit Fibre Channel connected storage
  • 25-gigabit networking for all servers
  • 100-gigabit network across server racks
  • HDD (3 IOPS/GB) and SSD (10 IOPS/GB) storage options for data volumes
  • Shared and dedicated processor options when you create guest VMs
  • Dynamic LPAR support for live add and remove of compute and memory to guest VMs
  • PowerVM NPIV-based storage virtualization for guest VMs (LPARs)
  • Dual VIOS
  • Redundant SAN and network fabrics
  • Redundant PDUs
  • Connectivity to GCP over dual interconnect zones, each with multiple aggregated links
  • PowerVC remote restart technology for improved guest VM availability with host failures
  • PowerVM LPM technology to sustain guest VM availability during disruptive infrastructure maintenance events

2 - Getting Started

Getting Started Guides for IBM Power for Google

Docs and information for getting started with a new subscription to IP4G.

2.1 - Before You Subscribe

Information to know before subscribing to IP4G

Before you subscribe to the IBM Power for Google Cloud service, review the following prerequisites:

  • Create a Google Cloud Project with a defined set of virtual private cloud (VPC) networks, which can be the default VPC for a project. For more information, see Projects and VPC networks.

  • Identify the network IP address space that you want to use for your cloud instance. Your IP address space must be compatible with the existing set of VPC networks in your Google Cloud project. If you want direct connectivity between your data center and the Google Cloud Platform (GCP), your cloud instance IP address space must also be compatible with your on-premises networks. The IP address space is specified in Classless Inter-Domain Routing (CIDR) notation when you configure the service. This range must be a private (RFC 1918) range that does not overlap with any other IP ranges in your VPC or any advertised IP ranges coming from interconnects or VPNs that are used for on-premises resources. An example of an IP address space in CIDR notation is 172.16.1.0/24.
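As a quick pre-check, you can list the CIDR ranges already in use by the subnets of a VPC with gcloud; any range you choose for IP4G must not overlap them. The network name here is illustrative:

gcloud compute networks subnets list --network=default --format="table(name,region,ipCidrRange)"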

2.2 - Subscribing to IP4G

Information about IBM Power for Google Cloud

Determine which Google Cloud Billing Account you would like to use for the IBM Power for Google Cloud subscription.

Create a new Google Cloud Project that is associated with the Billing Account. This Google Cloud Project will be used to grant IBM Power for Google Cloud administrators access to the IBM Power for Google Cloud Marketplace solution.

After you select the IBM Power for Google Cloud tile in the GCP Marketplace, review the available cloud instance plans and subscribe to the desired plan. Follow the workflow to register for the service and proceed to the initial configuration steps.

During the subscribing process, you must supply the information about your project, VPC, and the CIDR range for VMs created in your instance plan. This information is the basis of a series of gcloud commands that you run against your Google project to establish a connection to the service. This connection is a VPC Peering between your VPC and an IBM-managed tenant project over the Google network fabric. After the initial configuration is complete, access to the VM management GUI is enabled and your selected project is connected to your cloud instance. Next, you can create an LPAR and verify that you can connect to the LPAR.
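The exact gcloud commands, including the names of the IBM-managed tenant project and network, are generated for you during sign-up. As a representative sketch only, the peering they establish is of the following form (all names are placeholders):

gcloud compute networks peerings create ip4g-peering \
    --network=my-vpc \
    --peer-project=ibm-tenant-project \
    --peer-network=ibm-tenant-vpc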

You can change cloud instance plans to different size plans if capacity demands change over time.

To view a video on the subscription process, see here.

2.3 - Adding New Users

Information about IBM Power for Google Cloud

Initially, a single user is authorized to use a new IBM Power for Google Cloud subscription: the user who completed the sign-up process. Additional users must be authorized before they can log in to IBM Power for Google Cloud. Authorization requires the following.

  • A Google Cloud Project linked to a Google Billing Account with an active IBM Power for Google Cloud subscription
  • A User with a Google Cloud Identity
  • Google Cloud IAM Roles that grant the User access to the IBM Power for Google Cloud Marketplace solution.
  • The Account ID for your subscription. This requires an existing user. See locate your Account ID

Assign the new user the Editor IAM Role in the Google Cloud Project associated with the IBM Power for Google Cloud Billing Account.
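As a hedged example, the role can be granted with gcloud as follows (the project ID and user are placeholders):

gcloud projects add-iam-policy-binding my-ip4g-project \
    --member="user:newuser@example.com" \
    --role="roles/editor"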

NOTE: The Editor Role can be removed after the user has been authorized for IBM Power for Google Cloud. Converge recommends using a dedicated Google Cloud Project for authorizing users to access the IBM Power for Google Cloud Marketplace Solution. More about IAM Roles for Google Marketplace solutions can be found here. A restricted set of IAM permissions is under development.

The new user must navigate to the IBM Power for Google Cloud Marketplace solution using the link below. Ensure the correct Google Cloud project is selected.

NOTE: The Google Cloud project must be linked to a billing account where the IBM Power for Google Cloud plan/subscription has been purchased.

https://console.cloud.google.com/marketplace/product/ibm-sg/ibm-power-cloud-for-gcp

The Marketplace interface should show “Manage on Provider”. If “Manage on Provider” is not shown, verify the correct Google Cloud Project and IAM Roles are assigned.


After the new user clicks “Manage on Provider” they will be prompted to sign into their Google Cloud Identity. Once logged in, they will receive a login error from IBM Power for Google Cloud. The user is now pending authorization. Open a Google Cloud support request with the following content.

“We have granted a new user the appropriate IAM Roles and they have authenticated to IBM Power for Google Cloud using the MANAGE ON PROVIDER button. Please authorize [username@domain.com] for [Account ID]”

To add a ticket, see Create a Support Ticket

2.4 - Create a Support Ticket

Engaging support for IP4G

Support for IBM Power for Google Cloud is provided by Google Cloud Customer Care as Third-Party Technology Support using the Collaborative support model.

To learn more about the Third-Party Technology Support provided with Enhanced and Premium support from Google Cloud Customer Care see Getting Support for Google Cloud.

Open a support case for IBM Power for Google Cloud

To open a Google Cloud support case, follow the instructions for Creating cases in the Google Cloud Support portal.

When selecting the issue category, expand Partner Support, expand IBM Power Systems for Google Cloud, and select the most appropriate sub topic for your issue.

Select support case issue category

Google engages Converge on behalf of the client as needed for investigating possible issues with the IBM Power for Google Cloud infrastructure.

For AIX or IBM i operating system support see “Opening a support case with IBM”. In these cases, the problem is with the Operating System, not Google Cloud or IBM Power for Google Cloud.

Open a support case with IBM for operating system support

Support is available for versions of the AIX operating system that are currently in standard support. AIX versions that are in extended support are not supported for the IBM Power Systems for Google Cloud service. For more information, see AIX support lifecycle information.

You must use the IBM customer number that was provided during the subscription process for the IBM Power Systems for Google Cloud service.

To directly engage IBM for AIX or IBM i operating system support, go to the IBM Support portal.

Click Open a case, navigate to the Product field, enter AIX on Cloud or IBM i on Cloud. Complete all of the required information, and click Submit Case.

Locate your Account ID

You can find the Account ID in the IBM Power for Google Cloud Web Console or with the pcloud CLI. In the examples below, the Account ID is E-01DF-1DFC-6D14-4701. Using the Web Console, click your name > Accounts to see your Account ID.

Web Console: see Getting Started with the Web Console.

Using the pcloud CLI:

$ pcloud config list
accountID: E-01DF-1DFC-6D14-4701
cloudID: 75a23c3671c1sjke85788b65552a74ec
cloudName: demo
region: us-east4

Locate your Cloud Instance ID

Your IBM Power for Google Cloud Account may have multiple Cloud Instances. If you are using two regions, you will have two Cloud Instance IDs, one for each region. In the example below, the Cloud Instance ID is 75a23c3671c1sjke85788b65552a74ec. To locate the Cloud Instance ID, use the pcloud CLI.

Using the pcloud CLI:

$ pcloud config list
accountID: E-01DF-1DFC-6D14-4701
cloudID: 75a23c3671c1sjke85788b65552a74ec
cloudName: demo
region: us-east4

3 - How-to

How to Guides for Common IP4G Tasks

Documents and articles detailing how to do specific tasks in IP4G

3.1 - General How-To Documents

General how-to guides for IP4G

General guides and how-to documents for IP4G that apply to any operating system.

3.1.1 - Creating a virtual machine

Complete the following steps to create and configure an IBM Power for Google Cloud (IP4G) virtual machine (VM).

Creating an IP4G virtual machine instance

Follow these steps to create an IP4G virtual machine instance from the GUI. To create one from the CLI, see CLI VM Creation

  1. Navigate to the IP4G User Interface using the following URL: pforg
  2. Click Create Instance from the VM Instances view of the interface.

If no customer-specific images have been created, import a stock image. See Adding Images to the Image Catalog

Configuring a Power Systems virtual machine instance

Follow these steps to configure a new virtual machine instance.

  1. Complete all the fields under the virtual machines section.

    The Create an instance window

    • Name - Input a name for the virtual machine.
    • VM pinning - Select one of the options. Note that VM pinning is not available in all regions.
    • Number of Instances - Use the slide bar to set the number of instances. When creating an IBM i VM, this can only be set to 1.
  2. When more than one instance is selected, Affinity Policy and Naming Convention information is available.

    The Affinity policy and Naming Convention controls

    • Affinity Policy - Select the proper option. Not available for IBM i.
    • Virtual Machine Naming Convention - Select Postfix or Prefix to add a suffix or prefix to VMs.
  3. Complete the Boot Image fields. Select the proper type from the drop-down menus.

    The Boot Image fields

    • Operating system - Select the VM’s operating system.
    • System Type - Select the system type.
    • Image - Select the VM’s image.
    • Storage type - Select the storage type.
  4. Complete the Processing Profile fields.

    The Profile controls for processing profile

    • Processor - Select the correct processor type.
    • Cores - Use the slider to select the desired number of cores.
    • Memory - Use the slider to set the amount of memory, in GBs.
  5. Complete the Networks fields.

    The Attach existing network controls

    • Attach existing network - Select the network to attach from the listed options.
    • IP Address - Optional. Provide the VM’s TCP/IP address. If not set specifically, one will be assigned automatically.
  6. Complete the Volumes fields.

    The Create and attach new volume controls

    • Name - Use this field to put in a name for the volume.
    • Size - Use the slider to set the volume size, in GBs.
    • Quantity - Use the slider to set the number of volumes.
    • Shareable - Use this to toggle the ability for sharing volumes.
  7. Complete the various SSH Key fields. Choose to create a new SSH key or use an existing one.

    The SSH Key controls

    • New SSH Key

    The New SSH Key window

    • Choose existing SSH key

    The Choose existing SSH key field, showing examples

  8. Review your selections for accuracy and submit the form.
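For reference, the equivalent creation from the pcloud CLI is a single command; a minimal sketch using the flags shown in the how-to examples later in this guide (name, image, and network are illustrative):

pcloud compute instances create my-aix-vm -i AIX-7200-03-03 -t shared -p 0.5 -m 8 -n gcp-network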

3.1.2 - CPU Types

CPU Types in IBM Power for Google Cloud Platform (IP4G)

A Virtual Machine’s (VM) CPU allocation is determined by its Entitlement. Entitlement represents the guaranteed minimum amount of physical CPU resources available to the VM. When you provision a VM and set its Entitlement, you ensure that the VM will receive at least that level of CPU performance when needed. The vCPU value, which indicates the number of virtual processors available to the VM, is derived from the Entitlement. For VMs using shared processors, the vCPU value is the Entitlement rounded up to the nearest whole number. For VMs with dedicated processors, the vCPU value is equal to the Entitlement.

There are two primary processor types: shared and dedicated. Shared processors can be further broken down into capped or uncapped shared processors.

| CPU Type | Increment | Description |
|---|---|---|
| uncapped shared | 0.25 | CPU entitlement is guaranteed. vCPU is set to the Entitlement rounded up to the nearest whole number. The VM may consume up to the vCPU value if busy. |
| capped shared | 0.25 | CPU entitlement is guaranteed. vCPU is set to the Entitlement rounded up to the nearest whole number. The VM cannot consume more CPU than its entitlement. |
| dedicated | 1 | CPU entitlement is guaranteed. The VM is allocated whole CPUs. |
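For example, deploying a shared-processor VM with an entitlement of 0.5 yields a vCPU value of 1 (0.5 rounded up), so an uncapped VM could consume up to one full core when busy. A sketch using the pcloud flags shown elsewhere in this guide (names are illustrative):

pcloud compute instances create burst-test-vm -i AIX-7200-03-03 -t shared -p 0.5 -m 4 -n gcp-network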

3.1.3 - Creating a new VM with an SSH key for root

You can set up one or more SSH keys for root login when you create new virtual machines (VM) on a Power cloud instance. The keys are loaded into the root’s authorized_keys file. SSH keys allow you to securely log in to a VM. You must use the available operating system options to create SSH keys. To generate SSH keys on a Linux® or Mac OS system, for example, you can use the standard ssh-keygen tool.

Setting up an SSH key to be used in a VM create

In this example, the user created a public key on a Linux-based GCP compute instance by using the ssh-keygen tool:

$ cat .ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCtuQnQOc2k4zaGzE7b3xUMCjUy++s/9O9HE4fXSm7UNKoTY39zjQ8mhOwaA3HEo12tOdzdFDYHHWNOYufCcFFk61CAL6HyQGGClib1nFc1xUcgTI9Dee8zzaAsN8mIIr1CgbRELhvOsTv23U4QddpfjkcVoKfF0BAtxgauvooQdPZBoxa2rsD+BvcWnjglkYWG2aBbuzFvSl1fLMihjfej8w1lxbcsYEcJg2X96NJPLmLsEJ+XwoXfVuv0X4z8IoBzZ8UbyTlrDv73EAH34GViYfZFbrIaNnwnz/f/tuOKcINihH72YP+oZn9JeiHQ+hKpMqJAmOK2UIzYr3u+79n9 testkey

To use an SSH key with a VM create operation, you must first add the public key to the cloud instance by using the pcloud compute sshkeys create command. To add the ssh-keygen-generated public key, enter the following command (replacing the public key value with your own):

Important: You must enclose the --publickey value in quotations.

$ pcloud compute sshkeys create testkey --publickey "ssh-rsa AAAAB3NzaC
1yc2EAAAADAQABAAABAQCtuQnQOc2k4zaGzE7b3xUMCjUy++s/9O9HE4fXSm7UNKoTY39zjQ8mhOwaA3HEo12tOdzdFDYHHWNOYufCcFFk61CAL6HyQGGClib1nFc1xUcgTI9Dee8zzaAsN8mIIr1CgbRELhvOsTv23U4QddpfjkcVoKfF0BAtxgauvooQdPZBoxa2rsD+BvcWnjglkYWG2aBbuzFvSl1fLMihjfej8w1lxbcsYEcJg2X96NJPLmLsEJ+XwoXfVuv0X4z8IoBzZ8UbyTlrDv73EAH34GViYfZFbrIaNnwnz/f/tuOKcINihH72YP+oZn9JeiHQ+hKpMqJAmOK2UIzYr3u+79n9 testkey"
SSHKey created: testkey

To confirm that the key was successfully added, use the pcloud compute sshkeys list command.

$ pcloud compute sshkeys list
Name      Key                                          CreationDate
testkey   ssh-rsa AAAAB3NzaC1y...UIzYr3u+79n9 testkey  2019-07-26T18:21:56.030Z

Creating a VM instance with a configured SSH key

Now that you’ve added the key to the cloud instance, you can create a new VM with the key by using the following command:

$ pcloud compute instances create keytest-vm -i AIX-7200-03-03 -t shared -p 0.5 -m 5 -n gcpnetwork -k testkey

The preceding VM create operation resulted in a new AIX VM with an IP address of 172.16.7.16. You can now SSH to the AIX VM from a connected system that is configured with the private key for testkey. In the following example, the connecting system is a GCP x86 compute instance with direct connectivity to the Power cloud instance network.

$ ssh root@172.16.7.16
Enter passphrase for key '/home/keytest/.ssh/id_rsa':
Last login: Fri Jul 26 16:53:22 CDT 2019 on ssh from 10.150.0.11
*******************************************************************************
*                                                                             *
*                                                                             *
*  Welcome to AIX Version 7.2!                                                *
*                                                                             *
*                                                                             *
*  Please see the README file in /usr/lpp/bos for information pertinent to    *
*  this release of the AIX Operating System.                                  *
*                                                                             *
*                                                                             *
*******************************************************************************
# oslevel -s
7200-03-03-1914
#

On this AIX VM, you can find the testkey value in the authorized_keys file.

# cat .ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCtuQnQOc2k4zaGzE7b3xUMCjUy++s/9O9HE4fXSm7UNKoTY39zjQ8mhOwaA3HEo12tOdzdFDYHHWNOYufCcFFk61CAL6HyQGGClib1nFc1xUcgTI9Dee8zzaAsN8mIIr1CgbRELhvOsTv23U4QddpfjkcVoKfF0BAtxgauvooQdPZBoxa2rsD+BvcWnjglkYWG2aBbuzFvSl1fLMihjfej8w1lxbcsYEcJg2X96NJPLmLsEJ+XwoXfVuv0X4z8IoBzZ8UbyTlrDv73EAH34GViYfZFbrIaNnwnz/f/tuOKcINihH72YP+oZn9JeiHQ+hKpMqJAmOK2UIzYr3u+79n9 testkey

3.1.4 - Importing an OVA file into your Power cloud instance

You can import an OVA file to bring a new VM image and any attached data volumes into your Power cloud instance. An OVA file is an industry standard format that is used to package VM boot disk images and any related data volume images. If you are running IBM Cloud PowerVC Manager 1.4.1 or later in your local environment, you can generate OVA files from your Power LPARs.

To bring an OVA file into your cloud instance image catalog and deploy it as a new VM, complete the following steps:

  1. Start with a local OVA file that was generated by your existing PowerVC instance.

  2. Upload the OVA file into a selected Google Cloud Storage bucket.

  3. Set up access keys to the Google Cloud Storage bucket.

    Note: You supply these access keys when you issue the operation that imports the OVA file into your cloud instance.

  4. Import the OVA file by using either the web UI or running the pcloud command.

  5. After the import operation is complete, you can deploy a VM by using the new image in your image catalog.

Setting up a cloud storage bucket

You can set up a cloud storage bucket to stage OVA files for import with the pcloud command.

To begin, you must create a cloud storage bucket. The following graphic provides an overview of a user-created cloud storage bucket named power-ova-import-bucket. The Link for gsutil field contains access information that is an important part of the import operation.

Cloud bucket overview

To enable access to the cloud storage bucket for the Power Systems IaaS management service, you must create a storage access key. You can create a storage access key in the settings area of the cloud bucket management UI.

Cloud bucket settings

Selecting the create a new key option generates the access key and secret as shown in the following example:

Cloud bucket keys

After you generate an access key and secret, the cloud storage bucket can hold OVA files and be used with the Power IaaS management service image import feature. You can upload files to the cloud storage bucket by using the upload files feature of the web UI or with the gsutil command. For more information, visit the Google Cloud Storage Documentation site.

In the following example, a user created an OVA file that is named AIX-7200-03-03.ova.gz and placed it in the cloud storage bucket.
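If you prefer the command line to the web UI for the upload, the gsutil equivalent looks like this (bucket and file names match the example above):

gsutil cp AIX-7200-03-03.ova.gz gs://power-ova-import-bucket/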

Cloud bucket with OVA file:

Importing the OVA file into your cloud instance by using the pcloud command

The pcloud compute images import command imports an OVA file. For pcloud compute images import command usage and flag details, refer to the following code block:

$ pcloud compute images import -h

Import a new Image for a Cloud Instance.

Usage:
  pcloud compute images import <ImageName> --bucketname <Bucket> --filename <ImageFileName> --accesskey <AccessKey> --secretkey <SecretKey> [flags]

Flags:
  -a, --accesskey string     Cloud Storage Access Key (required)
  -b, --bucketname string    Cloud Storage Bucket (bucket name plus optional folders): bucket-name[/folder/../] (required)
  -f, --filename string      Cloud Storage Image File Name, should end with .ova, .ova.gz, .tar, .tar.gz or .tgz (required)
  -h, --help                 help for import
  -s, --secretkey string     Cloud Storage Secret Key (required)
  -t, --storagetype string   Storage Type (must be one of {"standard", "ssd"}) (default "standard")

Global Flags:
  -F, --format string      Available formats: 'table', 'yaml', 'json', 'csv'.
                            Default is command specific.
                            Can be used with describe and list subcommands.
  -D, --log.dir string     Override Log file directory
  -L, --log.file string    Override Log file name
  -V, --verbosity string   Override Log verbosity
  • ImageName - the argument that assigns a name to the image after you import it into your image catalog.
  • Bucket - the argument that provides the gsutil path to the cloud storage bucket where the OVA file resides. The path specification can include or omit the gs:// prefix.
  • ImageFileName - the name of the OVA file residing in the cloud storage bucket.
  • AccessKey - the generated storage access key. See the cloud bucket management UI settings.
  • SecretKey - the generated secret key. See the cloud bucket management UI settings.

The following example shows a customer importing an OVA file by using the pcloud command:

pcloud compute images import aix72-ova-import -b gs://power-ova-import-bucket -f AIX-7200-03-03.ova.gz -a GOOGL2ET2IZLPJ52AOTMKZ3B -s UFmy48unWpAUs3jt1y3NSe91bUL7UhW32LaQSRo0
Image created with "b05cde35-680b-47b6-85a8-b38404e5e64e" ID while doing "import" operation on "power-ova-import-bucket/7200-03-03.ova.gz" Image (complete Image import is not immediate)

Note: The pcloud command returns immediately. However, the actual time for the import operation to complete depends on the OVA file size.

The pcloud compute images describe command monitors the progress of the import operation. While the import operation is in progress, the image state is queued as in the following example:

$ pcloud compute images describe aix72-ova-import

imageID: d090596b-3f55-4034-90d4-e519ff9e737e
name: aix72-ova-import
cloudID: ""
description: ""
size: 0
operatingSystem: aix
architecture: ppc64
state: queued
containerFormat: bare
diskFormat: raw
endianess: ""
creationDate: "2019-07-19T15:35:39.000Z"
updateDate: "2019-07-19T15:35:39.000Z"

After the import operation finishes, the image state transitions to active.

$ pcloud compute images describe aix72-ova-import

imageID: d090596b-3f55-4034-90d4-e519ff9e737e
name: aix72-ova-import
cloudID: ""
description: ""
size: 20
operatingSystem: aix
architecture: ppc64
state: active
containerFormat: bare
diskFormat: raw
endianess: big-endian
creationDate: "2019-07-19T15:35:39.000Z"
updateDate: "2019-07-19T15:40:50.000Z"

The image is now part of the image catalog for the cloud instance and can be used to create new VMs. You can also delete the OVA file from the cloud storage bucket and remove the access keys.

$ pcloud compute images list

ImageID                               Name
8f718bb5-495c-4a0d-b537-d2ad4b03f8c1  AIX-7200-03-03
d090596b-3f55-4034-90d4-e519ff9e737e  aix72-ova-import

Creating a new VM with the imported image

You can create a VM with the newly imported image by typing in the following command:

$ pcloud compute instances create import-test-vm -t shared -p 1 -m 6 -n gcp-network -i aix72-ova-import

"import-test-vm" VM Instance being created (complete VM Instance creation is not immediate)

After a short period, the VM is deployed and is ready for access.

$ pcloud compute instances describe import-test-vm

instanceID: 8ac6c2eb-8497-444e-9aac-5b9b31a97aed
name: import-test-vm
cloudID: 7f16fae4f3f54d8bb62f75645db56905
processors: 1
procType: shared
memory: 6
migratable: false
status: ACTIVE
health: OK
systemType: IBM S922
imageID: d090596b-3f55-4034-90d4-e519ff9e737e
networks:
- ipAddress: 192.168.0.10
  macAddress: fa:86:bc:91:9d:20
  networkName: gcp-network
  networkID: 8e72b5cc-9e50-4b06-bc56-eb4e1781eefe
volumeIDs:
- 122405f4-14a9-49f0-a665-2b3c08f4a3f4
creationDate: "2019-07-19T16:20:49.000Z"
updateDate: "2019-07-19T16:20:49.000Z"
$ pcloud compute instances console import-test-vm

console: https://pforg.ibm.com/console/index.html?token=<token>

To verify that the VM is working correctly, log into the system using the AIX console.

AIX Console:

3.1.5 - Capturing and exporting a virtual machine

Virtual machine (VM) instances can be captured and exported from the IBM Power for Google Cloud Platform (IP4G) service. This can be done through either the command line interface (CLI) or web interface. The captured image is stored as a new volume on the back end storage. A captured image can then be exported to Google Cloud Storage. Images are exported in an Open Virtualization Appliance (OVA) file. OVA files are compressed using gzip before export to Google Cloud Storage.

When capturing an image, an export destination of “image catalog” and/or “cloud storage” can be selected. The image catalog resides in the customer’s IP4G storage area. It can be used as a template to create new VMs. The cloud storage option transfers the image to Google Cloud Storage immediately. Images in the image catalog are transferable to cloud storage at a later date as well.

Only one Image Capture and Export or Import function can be performed at a time.

Flush file system buffers to disk

Images captured from running VMs will be captured in a “crash-consistent” state. For best results, when possible, capture images with the VM shut down. If a VM cannot be stopped before capture, it is recommended to flush the file system buffer cache to disk. Use the following commands to accomplish this:

  • IBM i: Use the following command to flush all buffers to disk. Do this prior to capturing the image to ensure data integrity.
CHGASPACT ASPDEV(*SYSBAS) OPTION(*FRCWRT)
  • AIX or Linux: Use the following command to flush file system buffers to disk:
sync; sync; sync

Performing capture and export via the IP4G user interface

Use the following steps to perform a capture and export through the IP4G interface.

  1. Navigate to the IP4G Console. Select the desired virtual machine to capture.
  2. Select the Capture and Export icon in virtual machine instance view. The icon appears in the upper left corner.
  3. All volumes for the virtual machine are automatically captured by default.
  4. Determine where the volume-backed image or OVA will be exported: the image catalog, Cloud Storage, or both.
  5. Provide the captured image a Name.
  6. Optional: when exporting to Cloud Storage, specify the following additional parameters:
    • Bucket name and any optional folders.
    • Access and Secret Keys.
  7. Select Capture and export.
  8. After a successful capture or export, a confirmation message is displayed. It will read “When large volumes (in size and/or quantity) are selected, export processing may take a significantly long period of time to complete.”
  9. Find the newly exported image by completing either one of the following tasks:
    • If Cloud Storage was selected for the export, navigate to the Cloud Storage bucket in GCP.
    • If image catalog was selected for the export, navigate to Boot images in the IP4G user interface.
  10. Optional: volume backed images in image catalogs can also be exported to Cloud Storage. After choosing the desired Boot Image, select the Export function on the top of the screen.

Performing capture and export using the pcloud CLI

The pcloud CLI can also be used to capture and export a virtual machine image.

The pcloud compute instances capture command can be used to capture a virtual machine image. The image can be exported to an image catalog, Cloud Object Storage, or both.

Capture VM Instance to image catalog:

pcloud compute instances capture <InstanceName> --destination image-catalog --name <ImageName>

Capture VM Instance to Google Cloud Storage:

pcloud compute instances capture <InstanceName> --destination [cloud-storage|both] 
--name <ImageName> --bucketname <Bucket> --accesskey <AccessKey> --secretkey <SecretKey> [flags]

Use the following command to view exported images in the image catalog:

pcloud compute images list
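As a concrete sketch, capturing a VM to both destinations in one command looks like the following (the instance name, image name, and bucket are illustrative; the keys are the Cloud Storage access and secret keys described in the OVA import guide):

pcloud compute instances capture my-aix-vm --destination both --name my-aix-capture --bucketname power-ova-import-bucket --accesskey <AccessKey> --secretkey <SecretKey>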

3.2 - AIX How-To Documents

3.2.1 - Preparing systems for migration

Use the following prerequisites and recommendations to ensure a successful migration to IBM Power for Google Cloud (IP4G).

For AIX systems, ensure that the operating systems are running the following minimum versions:

| AIX Version | Minimum TL/ML | Notes |
|---|---|---|
| 5.3 | 5300-12-04 | Only supported within a Versioned WPAR |
| 7.1 | 7100-05-06 | |
| 7.2 | 7200-05-01 | |
| 7.3 | 7300-00-01 | |

Additional software requirements:

  • The devices.fcp.disk.ibm.mpio package must not be installed. Uninstall it if necessary.

Additional recommendations:

  • Install cloud-init, using dnf, yum, or RPM; it has several prerequisite packages. The cloud-init package is required to leverage some features. Those features include:
    • Image capture and deploy.
    • Automatic configuration of IP addresses, through the IP4G interface.
  • Update MPIO settings for each disk to set algorithm to shortest_queue or round_robin (see the sketch after this list):
    chdev -P -l hdiskX -a algorithm=shortest_queue -a reserve_policy=no_reserve
    
  • Reboot for the attribute changes to take effect.
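A minimal sketch for applying the attribute change to every disk on the VM, assuming all hdisks should use the same policy:

for d in $(lsdev -Cc disk -F name); do
  chdev -P -l $d -a algorithm=shortest_queue -a reserve_policy=no_reserve
done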

3.2.2 - Migrating to IP4G using a mksysb and alt_disk_mksysb

Use a mksysb from an existing system to migrate into IP4G. Do this by building a new system from a stock image, and using the alt_disk_mksysb command. The following steps highlight how to do so using the pcloud CLI. However, the IP4G specific steps can also be performed from the GUI.

Capturing a Source System mksysb

First, check if the fileset devices.fcp.disk.ibm.mpio exists. To do this, execute the following:

lslpp -Lc | grep devices.fcp.disk.ibm.mpio

If the fileset is installed, there are two ways to handle it: remove it from the system before the mksysb is created, or remove it when running alt_disk_mksysb. The latter involves using the alt_rootvg_op command to wake the alt disk and remove the fileset.

Customers are responsible for evaluating the impact of removing the fileset in their environment.

To remove it, execute:

installp -u devices.fcp.disk.ibm.mpio

To begin, take a mksysb on the source system. It is recommended that the rootvg of the source system is not mirrored. If it is, edit the image.data file to unmirror it before restoring it. Details for that are included below. If the source system is running AIX 7.2 or higher, use the following command:

mksysb -C -i /path/to/hostname.mksysb

If the source system is running AIX 7.1, use the following command:

mksysb -i /path/to/hostname.mksysb

Build the target system

Build the system to do the alt_disk_mksysb on. This system will boot at first from a stock AIX image. Then after the mksysb restore, it will boot from the restored AIX image. If a stock image has not already been imported, see Obtaining a VM Image


To get started, gather the following information:

  • Use a hostname that will be the final hostname in IP4G.
  • Use a stock image that most closely matches the source system. Later TL/SP levels are OK.
  • Use the desired CPU, Memory, CPU Type, and Network settings for the final host to have.

Example:

pcloud compute instances create hostname --image AIX-7200-05-09 --network gcp-network -c 0.25 -m 8 -t shared

Add disks to the target system

Two disks are needed: one for temporary storage of the mksysb file, and one as the target disk for alt_disk_mksysb. Wait for the new instance to build.

  1. First, build the target disk for the mksysb. It needs to be large enough to hold the source system rootvg. Change the size and disk type appropriately for the system.

    pcloud compute volumes create hostname-rootvg -s 20 -T ssd
    
  2. Log into the new target system from the console as root. Then, run cfgmgr to discover the new disk. There should now be two disks: hdisk0, the original stock AIX image disk, and hdisk1, the target disk for the mksysb. One more disk is needed to hold the mksysb to restore. The easiest approach to clean up later is to add it to the rootvg and expand /tmp.

  3. Use the pcloud command to create a new disk, changing the size to be sufficient for holding the mksysb:

    pcloud compute volumes create hostname-tempdisk -s 20 -T ssd
    
  4. Log into the target system, and discover the new disk with cfgmgr. The new disk should be hdisk2. To validate which is which, run: lsmpio -qa

  5. Add the disk to the rootvg. Note that it is important to only add the temporary disk here: extendvg rootvg hdisk2.

  6. Add space to /tmp: chfs -a size=+20G /tmp

Restore the mksysb on the target system

Use the following to restore the mksysb on a target system.

  1. Copy the mksysb file from the original source system to the new target system.

  2. Place it in /tmp. Use any preferred method, such as scp, for transferring the mksysb. Note that if the original rootvg was mirrored, unmirror it before using alt_disk_mksysb. Do this by restoring the image.data file and editing it. It must be edited so the PPs line for each LV is equal to the LPs line. To restore image.data, use this command:

    restore -xqvf /tmp/hostname.mksysb ./image.data
    
    • If this had to be done, specify to use the new image.data when restoring. Add the flag -i /tmp/image.data to the alt_disk_mksysb command.

    • To restore the mksysb use the alt_disk_mksysb command:

    alt_disk_mksysb -m /tmp/hostname.mksysb -d hdisk1 -z -c /dev/vty0
    
    • This will automatically set the bootlist to boot off of the new volume, hdisk1.
    • Reboot: shutdown -Fr
  3. Confirm the VM has booted from the disk containing the restored mksysb. In this example, that would be hdisk1. Use lspv to validate which disks / vg’s are present:

    lspv
    

Clean up the temporary disks

Use the following to clean up the temporary disks.

  1. Use exportvg to remove the old rootvg, then remove the old disk devices:

    exportvg old_rootvg
    rmdev -dl hdisk0
    rmdev -dl hdisk2
    
  2. Use the pcloud command to set the new rootvg volume as bootable:

    pcloud compute volumes update hostname-rootvg --bootable yes
    
  3. Use the pcloud command to clean up the old hdisks. Find the old boot disk name using:

    pcloud compute instances describe hostname
    

Set the old disk as not bootable

Set the old disk so it is not bootable. Match the name to the boot-0 volume from the instances describe output. To do this use:

pcloud compute volumes update hostname-d4751509-00000b25-boot-0 --bootable no

Delete the original boot volume and the temporary disks

Use the following to delete the original boot volume, and the temporary disks.

pcloud compute volumes delete hostname-d4751509-00000b25-boot-0

pcloud compute volumes delete hostname-tempdisk

3.2.3 - AIX MPIO Recommendations

This document outlines Multipath I/O (MPIO) best practices for IBM Power for Google (IP4G) deployments, focusing on actions customers can take to ensure optimal performance and availability.

Configuration

Converge handles the underlying MPIO configuration, including redundant paths, adapter diversity, and fabric management.

You can learn more about general MPIO configuration from the official IBM documentation.

It is important that customers understand their application and select the MPIO policies that best suit its requirements.

Key Considerations:

  1. Redundant Paths: Converge provides four physical paths to the backend storage, distributed across two VIOS, for enhanced redundancy.
  2. Dual Fabric Fiber Channel: Converge uses dual fabric Fiber Channel for all paths to minimize single points of failure.
  3. Pathing Policy: Customers should understand and adjust the MPIO pathing policy if needed (a sketch for checking and changing it follows this list). Common options include:
    • Round Robin: Distributes I/O requests evenly across available paths.
    • Shortest Queue: Directs I/O to the path with the least congestion.
    • Failover Only: Designates a primary path and uses alternative paths only when the primary fails.
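To check which algorithm a disk is currently using, and to change it persistently (applied at the next reboot because of -P), a sketch with an illustrative disk name:

lsattr -El hdisk0 -a algorithm
chdev -P -l hdisk0 -a algorithm=shortest_queue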

Monitoring Available Paths

Regularly monitor the status of MPIO paths to proactively identify potential issues. Customers should consider integrating MPIO path status into their monitoring and alerting. Sample commands for monitoring:

lspath:

lspath

This command displays path status for all devices.

lspath -l <device_name> 

This command displays all paths associated with a specific device, including their status (Available, Defined, Failed).

lsmpio:

lsmpio

This command shows detailed information for all devices and their paths, including path status.

lsmpio -l <device_name>

This command provides detailed information for a specific device.

Scheduled Maintenance

Converge manages all hardware and VIOS maintenance, including firmware upgrades and network changes. Converge sends notifications in advance of any planned maintenance.

Before Maintenance:

  • Check Path Status: Use lspath or lsmpio to get a baseline of current path status. This will help identify any discrepancies after maintenance.

  • Resolve any Down Paths: If paths are discovered as down, they should be fixed prior to maintenance to avoid an outage. A standard method for doing so is to:

    • Find the failed paths using lspath, note the hdisk and fscsi device
    • Remove the failed paths using rmpath -l hdiskX -p fscsiY
    • Rediscover all paths using cfgmgr
    • Use lspath to verify the path state

After Maintenance:

  • Verify Path Status: Use lspath or lsmpio again to confirm that all paths have recovered and are in the “Available” state.

  • Recover Paths: Sometimes AIX does not automatically recover paths. In these scenarios, customers should attempt to recover the paths. A standard method for doing so is to:

    • Find the failed paths using lspath, note the hdisk and fscsi device
    • Remove the failed paths using rmpath -l hdiskX -p fscsiY
    • Rediscover all paths using cfgmgr
    • Use lspath to verify the path state

Report Issues: If there are any issues with pathing or storage connectivity after maintenance, promptly report them to Converge for resolution.

By following these guidelines and proactively monitoring MPIO paths, customers can ensure the high availability and performance of their applications running on IBM Power for Google.

3.2.4 - AIX TCP/IP Settings

Optimizing TCP/IP Settings for Improved Network Performance in IP4G

This document provides guidance on adjusting TCP/IP settings in your IP4G environment within Google Cloud to potentially enhance network performance. These settings are intended as starting points and may require further tuning based on the specific needs of your applications and virtual machines.

Note: Before making any changes, ensure you have a baseline understanding of your current network performance. This will help you assess the impact of any adjustments made.

The following commands can be used to modify the TCP/IP settings:

chdev -l en0 -a tcp_sendspace=2049152
chdev -l en0 -a tcp_recvspace=2049152
chdev -l en0 -a rfc1323=1
chdev -l en0 -a mtu=1460
no -p -o sb_max=8196608
no -p -o tcp_nodelayack=0
no -p -o sack=1
chdev -l en0 -a mtu_bypass=on
no -p -o tcp_sendspace=2049152
no -p -o tcp_recvspace=2049152

Explanation of Settings

  • tcp_sendspace & tcp_recvspace: These settings control the send and receive buffer sizes for TCP connections. Increasing these values can improve performance, especially for high-bandwidth connections.
  • rfc1323: Enables TCP extensions defined in RFC 1323, including Timestamps and Window Scaling, which can improve performance on high-latency connections.
  • mtu: Sets the Maximum Transmission Unit (MTU) size. This value determines the largest packet size that can be transmitted over a network. In Google Cloud, the default VPC MTU is 1460 bytes. While you can adjust this to a value between 1300 and 8896 bytes (inclusive), it’s generally recommended to stay aligned with the VPC MTU to ensure compatibility within the Google Cloud environment and avoid potential fragmentation issues. If your VPC is configured with a custom MTU, ensure the mtu setting on your IP4G instances matches the VPC MTU. If your GCP VPC is at the default 1460 MTU, your IP4G AIX instances should use an MTU of 1440.
  • sb_max: Sets the maximum socket buffer size. Increasing this value can improve performance for applications that utilize large socket buffers.
  • tcp_nodelayack: Controls delayed TCP acknowledgments. Setting tcp_nodelayack=1 sends acknowledgments immediately, which can reduce latency for certain applications at the cost of extra network overhead; the setting above leaves delayed acknowledgments enabled (0).
  • sack: Enables Selective Acknowledgment (SACK), which can improve performance in the presence of packet loss.
  • mtu_bypass: Enables largesend (TCP segmentation offload) on the interface, allowing the adapter to send TCP segments larger than the MTU, potentially improving throughput for certain applications.

Evaluating the Results

After implementing these settings, it’s essential to monitor your network performance to determine their effectiveness. Several tools can assist in this evaluation:

  • Network Monitoring Tools: Utilize tools like netstat, tcpdump, or Wireshark to monitor network traffic and identify any bottlenecks or performance issues.
  • Performance Benchmarking Tools: Employ tools like iperf3 (see the example after this list) to measure network throughput and latency before and after applying the settings.
  • Application-Specific Monitoring: Monitor the performance of your applications to assess the impact of the TCP/IP adjustments on their behavior.
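As a hedged example of a before/after throughput check with iperf3, run a server on one VM and a client on another (the server address is a placeholder; -P 4 uses four parallel streams, -t 30 runs for 30 seconds):

iperf3 -s

iperf3 -c <server_ip> -P 4 -t 30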

Remember: These settings are starting points, and further adjustments may be necessary based on your specific environment and application requirements. Continuously monitor and fine-tune these settings to optimize your network performance.

Additional Considerations

  • Google Cloud Network Infrastructure: When adjusting TCP/IP settings, consider the characteristics of your Google Cloud Virtual Private Cloud (VPC) network. Factors like the configured MTU (typically 1460 bytes), subnets, firewall rules, and any network virtualization layers can influence network performance. Ensure your settings are compatible with your VPC configuration and don’t introduce unintended bottlenecks.
  • Application Requirements: Different applications have varying network performance needs. Research and understand the specific requirements of your applications to fine-tune the settings accordingly. For example, applications sensitive to latency might benefit from disabling tcp_nodelayack, while those prioritizing throughput might benefit from larger send and receive buffers.
  • Virtual Machine Configuration: If you’re running virtual machines on Compute Engine, ensure the virtual network interfaces are configured correctly. Verify that the machine type provides sufficient network bandwidth and that no resource limitations on the VM instance are hindering network performance.

By carefully adjusting and monitoring your TCP/IP settings, you can potentially enhance the performance of your IP4G environment and ensure optimal network efficiency for your applications.

3.2.5 - Install gcloud SDK on AIX

Installing the gcloud SDK on AIX allows you to download from and upload to Google Cloud Storage buckets, as well as control other aspects of your Google Cloud environment. On AIX, it is primarily used for interacting with storage buckets and objects.

This guide is not comprehensive, as covering all AIX versions and types is not possible. The gcloud SDK requires Python 3.8 or above, so installation is easiest on AIX 7.3. This example assumes a system built by downloading the AIX 7.3 TL1 stock image.

First, prepare your filesystems for new content

chfs -a size=+2G /opt
chfs -a size=+500M /tmp
chfs -a size=+500M /var

Next, I recommend you update your system using SUMA. To do this, we’ll clear out /usr/sys/inst.images first

rm -rf /usr/sys/inst.images/*
smitty suma

Select Download Updates Now (Easy)
Select Download All Latest Fixes

Once those have downloaded, update your system using

smitty update_all

For the directory, enter /usr/sys/inst.images
Change ACCEPT new license agreements? to yes

Once those updates have installed, run:

updtvpkg
dnf update python3 dnf

You should now be ready to install the prerequisite software for the gcloud SDK:

dnf install curl coreutils tar git bash python3-pip

Change your path to use the new GNU utilities:

export PATH=/opt/freeware/bin:$PATH

Download the gcloud sdk installer and run it

curl https://sdk.cloud.google.com | bash

For an installation directory use /opt/freeware/
For Do you want to help improve the Google Cloud CLI (y/N)? say n

You will now see:

ERROR: (gcloud.components.update) The following components are unknown [gcloud-crc32c].

You can disregard this. You may wish to switch to a non-root user for the remaining steps.

Set your path

export PATH=/opt/freeware/google-cloud-sdk/bin/:$PATH:/opt/freeware/bin

Now you can run the Google Cloud SDK:

gcloud auth login

Follow the login prompts, pasting in the code to authenticate.

Adjust your CRC settings. Using if_fast_else_skip is faster and uses less CPU, but also does no CRC checking.

gcloud config set storage/check_hashes if_fast_else_skip

or

gcloud config set storage/check_hashes always

You should now be able to list the content of buckets you have access to, and download files.

gcloud storage ls gs://<bucketname>
gcloud storage cp gs://<bucketname>/<filename> /path/to/download/

3.2.6 - RMC details and troubleshooting

This article provides details on Resource Monitoring and Control (RMC). Also presented below are troubleshooting methods for common problems.

Use the methods below to address issues. If unable to resolve the issue, reach out for support. For more information about contacting support, see Create a Support Ticket.

What is RMC?

Management consoles use RMC to perform dynamic operations on a virtual machine (VM). RMC connections are routed through a dedicated internal virtual network using IPv6. That network’s configuration prevents a VM from communicating with another VM.

How to troubleshoot RMC

The methods below can help troubleshoot common problems with RMC. The most common problem is a VM that cannot be modified online because it is in an unhealthy state. In the example below, the health of the virtual machine is listed as “Warning”.

$ pcloud compute instances list
InstanceID                            Name             Status   Health   IPs
12345678-9abc-123a-b456-789abcdef123  lpar1            ACTIVE   WARNING  [192.168.1.5]

Restart RMC

Restarting RMC is the most common solution.

/usr/sbin/rsct/bin/rmcctrl -z
/usr/sbin/rsct/bin/rmcctrl -A
/usr/sbin/rsct/bin/rmcctrl -p

Be aware that layered software using Reliable Scalable Cluster Technology (RSCT) will be impacted. For example, this will trigger an immediate failover in PowerHA environments.

Validate RSCT version

Validate the version of RSCT. Methods for this depend on the operating system. The RSCT packages must be at version 3.2.1.0 or later.

  • AIX
lslpp -L rsct.*
  • RedHat
rpm -qa | grep -e rsct -e src

Gathering RMC information

Use the following to gather information about the RMC. This information can be helpful in resolving many issues.

/usr/sbin/rsct/bin/lsnodeid
lsrsrc IBM.MCP
/opt/rsct/bin/rmcdomainstatus -s ctrmc

Validating Connectivity

Validate the connectivity by using the methods below.

  1. Verify that the en1 interface has an IPv6 address beginning with fe80::
    • For AIX use: netstat -in
      # netstat -in
      Name   Mtu   Network     Address                 Ipkts     Ierrs        Opkts     Oerrs  Coll
      ...
      en1    1500  fe80::ecad:f1ff:febe:ea13              711114     0           711198     0     0
      ...
      
      Make sure the following lines are uncommented in /etc/rc.tcpip:
      start /usr/sbin/autoconf6 "" " -i en1"
      start /usr/sbin/ndpd-host "$src_running"
      
      Then, execute the following: autoconf6 -i en1
    • For Linux use: ip addr show
  2. Get the HMC or Novalink IPv6 address from the virtual machine. Use this command: lsrsrc IBM.MCP
  3. Ping the IPv6 address. If the ping fails, please escalate to support.
  4. Telnet to the IPv6 address on port 657 (telnet ipv6_address 657). If the ping is successful but telnet fails to connect, there may be a firewall issue.

Verify the services are active

Use the following command to verify if the services are active.

lssrc -s ndpd-host

If it isn’t active, use the following:

startsrc -s ndpd-host
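The RMC subsystem itself can be checked the same way; ctrmc is the subsystem that the rmcctrl commands shown earlier manage:

lssrc -s ctrmc

If it is inoperative, restart it with the rmcctrl sequence above.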

3.3 - IBM i How-To Documents

3.3.1 - Accessing IBM i Virtual Machines

This article covers how to access IBM i virtual machines (VMs) in IBM Power for Google Cloud (IP4G), including newly created VMs. Typically, end-users access IBM i VMs running in IP4G the same way they access IBM i systems running on-premises. Network traffic directed to IP4G VMs normally routes over any of the available connectivity methods. However, if network connectivity has not been completed, use the following procedures to gain access.

Requirements:

  • IP connectivity to IP4G VM
  • IBM i Access Client Solutions (iACS) installed per IBM Documentation

Configuring port forwarding

Use any 5250 emulator to access IP4G VMs by using SSH tunneling to forward port 23. IBM i Access Client Solutions (iACS) requires forwarding several other ports for licensing and other system administrative functions. By default, the majority of the required ports are blocked by IP4G and Google Cloud firewalls. Leverage SSH tunneling to forward these ports to a local workstation and gain access.

First, start the required TCP/IP servers on the VM:

  • SSH - For remote logins
STRTCPSVR SERVER(*SSH)
  • ADMIN HTTP server - IBM i Navigator & Digital Certificate Manager
STRTCPSVR SERVER(*HTTP) HTTPSVR(*ADMIN)
  • Telnet - Remote TN5250 sessions
STRTCPSVR SERVER(*TELNET)

The required ports to forward are:

  • 23
  • 2001
  • 2005
  • 449
  • 8470-8476

Configuring port forwarding under macOS or Linux

If using a Mac or Linux system, use the following command or similar:

ssh -L 50000:localhost:23 -L 2001:localhost:2001 -L 2005:localhost:2005 \
-L 449:localhost:449 -L 8470:localhost:8470 -L 8471:localhost:8471 \
-L 8472:localhost:8472 -L 8473:localhost:8473 -L 8474:localhost:8474 \
-L 8475:localhost:8475 -L 8476:localhost:8476 -o ExitOnForwardFailure=yes \
-o ServerAliveInterval=15 -o ServerAliveCountMax=3 <user>@<ipaddress>

Where <user> is QSECOFR or another user created on the target VM, and <ipaddress> is the IP address of the IP4G VM.

Configuring port forwarding under Windows using PuTTY

If using a Windows system, you can use the free PuTTY utility.

Launch PuTTY. Under Session, fill in the Host Name (or IP address) field. Use the public IP address of the IBM i VM in IP4G. For Connection type, select SSH.

Next, in the left side navigation pane, expand the Connection tree. Then expand the SSH tree. Within that tree, click on Tunnels. On that screen:

  • Check “Local ports accept connections from other hosts”
  • Check “Remote ports do the same (SSH-2 only)”

Next, add and properly set the ports from the required port list above. The ports 23, 2001, 2005, 449, and 8470-8476 each need to be added. For each port:

  • Enter the port number into the Source port field.
  • Set Destination to “localhost:<port>”, using the same port number as the Source port (see the exception for port 23 below).
  • Click Add.
  • Repeat these steps until all of the required ports are added.
  • For destination port 23, the Source port should be set to 50000.

Click on Session in the left navigation window. Give the just completed configuration a name, and click Save. This will prevent having to perform the previous steps again for this VM.

At the bottom of the PuTTY Configuration window, click Open. This starts the PuTTY session and begins port forwarding. A prompt to accept the remote system key on first use will appear. Click Accept. Then, log in using QSECOFR or another configured user.
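
Alternatively, PuTTY's command line companion plink accepts the same -L forwarding syntax as OpenSSH. A minimal sketch, assuming plink.exe is on the PATH and using the same placeholder user and address as the earlier ssh example:

plink.exe -ssh -L 50000:localhost:23 -L 2001:localhost:2001 -L 2005:localhost:2005 ^
  -L 449:localhost:449 -L 8470:localhost:8470 -L 8471:localhost:8471 ^
  -L 8472:localhost:8472 -L 8473:localhost:8473 -L 8474:localhost:8474 ^
  -L 8475:localhost:8475 -L 8476:localhost:8476 <user>@<ipaddress>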

Configuring iACS to use forwarded ports

Next, configure iACS to use forwarded ports. Do this only after port forwarding has been configured and started.

Create a new 5250 session in iACS.

  • Use localhost, or 127.0.0.1 for the destination address.
  • Set the Destination Port to 50000.

Click OK and connect to the system.

Additionally, IBM i Navigator can be accessed through the following URL:

https://127.0.0.1:2005/ibm/console/login.do?action=secure

3.3.2 - Accessing the IBM i Console

This article explains two ways to access the IBM i console. The first method is through the IBM Power for Google Cloud (IP4G) user interface. The second method is through a LAN console.

IP4G web console

Browse to an IBM i instance from the VM Instances list. Then, click on the Console button in the actions toolbar:

![Console Button](/images/how-to/ibmi/ibmi-console-access/console-button.png)

A VNC window connected to the console session will open.

Console window

Use the tool bar at the bottom to access the Extended Function keys, F13 through F24. Some users may need to scroll the VNC window down to see them.

Console Toolbar

The shift key does not always work for the Extended Function keys. Click the “Next P…” button to display the Extended Function Keys.

The web console will time out occasionally. A session can be re-established by refreshing the IP4G web interface page.

LAN Console

Optionally, set up a LAN console by adding an additional network interface to the Virtual Machine (VM). This can be done via the pcloud command line.

Obtain a list of the available networks:

pcloud compute networks list
NetworkID                             Name                 VLANID
2c45110a-2a33-4880-a90d-000000000000  test-network-1       334
932cda0c-6cc7-4a5a-93f2-000000000000  test-network-2       111
fcc506f3-70e2-45a2-9ee3-000000000000  test-network-3       20

Obtain a list of the available VM instances:

pcloud compute instances list
InstanceID                            Name            Status   Health  IPs
...
36e0b4ac-1e63-410d-97f2-000000000000  tst-ibmi72      ACTIVE   OK      [10.3.4.116]
...

Attach a second network interface to the VM. Specify the network name and the VM instance name obtained above.

pcloud compute instances attach-network tst-ibmi72 --network test-network-1

When this command executes, IP4G assigns the network and selects an IP address from the pool. Use this IP address for the Service Tools LAN Adapter. However, it must be assigned manually. Use the following command to view the new network:

pcloud compute instances describe tst-ibmi72
instanceID: 36e0b4ac-1e63-410d-97f2-000000000000
name: tst-ibmi72
...
systemType: s922
cores: 1
procType: dedicated
memory: 8
pinPolicy: none
status: ACTIVE
...
networks:
- ipAddress: 10.3.4.142
  macAddress: 00:00:00:00:00:00
  networkName: test-network-1
...

Per the example, the new network interface uses the IP address 10.3.4.142.

Use the following command to determine the resource name of the new interface:

WRKHDWRSC *CMN

CMN Resources

In this example, the new adapter is CMN05. This is the device to use in Service Tools. When the network is added via the pcloud command, cloud-init may create a line description for the new device. Check this with the WRKLIND command. If a new line description was created, vary it off and delete it, as sketched below.
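
A minimal CL sketch for removing an unwanted line description. The name ETHLIN01 is a placeholder; use the name shown by WRKLIND:

VRYCFG CFGOBJ(ETHLIN01) CFGTYPE(*LIN) STATUS(*OFF)
DLTLIND LIND(ETHLIN01)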

Start Service Tools:

Service Tools

Choose Option 8, “Work with Service Tools Server Security and Devices”:

Service Tools Devices

Press F13 (Select STS LAN adapter). A list of the available resources will be displayed:

SST LAN

Choose Option 1, then press Enter to go to the configuration panel. Enter the IP address recorded earlier. Then, enter the matching netmask from the IP4G network configuration:

SST LAN Configuration

Press F7 to store the configuration. Then press F17 to de-activate/activate the adapter. It should then be possible to ping the IP address of the adapter. Use this as the console address in IBM i Access Client Solutions:

Service Host Name

Once configured, use the “5250 Console” link in IBM i Access Client Solutions to access the console. Optionally, take over the console if prompted. Use the default user account to connect. The default user account uses “11111111” as the user name and password. Alternately, use an account created in Service Tools.

Takeover Console

3.3.3 - Preparing systems for migration

Use the following prerequisites and recommendations to ensure a successful migration to IBM Power for Google Cloud (IP4G).

For IBM i systems, ensure that the operating systems are running the following minimum versions:

IBM i version  Minimum PTF Level  Notes
7.2            TR8                Only on Power 9 systems. 7.2 has been withdrawn from support.
7.3            TR12
7.4            TR6
7.5                               Supported in base release.

Additional software requirements:

  • Ensure PASE and the Open Source environment are installed and bootstrapped.
  • Ensure cloud-init is installed. It is required to support IBM i OS licensing functions. Be aware that there are several prerequisites to install for cloud-init.


3.4 - Linux How-To Documents

3.4.1 - Preparing systems for migration

Use the following prerequisites and recommendations to ensure a successful migration to IBM Power for Google Cloud (IP4G).

For Power Linux systems, ensure that the operating systems are running the following minimum versions:

Linux Distribution            Minimum Release    Notes
Red Hat Enterprise Linux      8.4 for Power LE   Earlier releases might run on Power 9 systems only
                              9.0 for Power LE   Earlier releases might run on Power 9 systems only
SuSE Linux Enterprise Server  15 Service Pack 3  Earlier releases might run on Power 9 systems only
Ubuntu Server                 22.04              Earlier releases might run on Power 9 systems only

Additional recommendations:

  • Install cloud-init, as sketched after this list. This can be done using dnf, yum, or RPM; note that cloud-init itself has several prerequisite packages. The cloud-init software package is required to leverage some features. Those features include:
    • Image capture and deploy.
    • Automatic configuration of IP addresses through the IP4G interface.
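
A minimal installation sketch, assuming a RHEL-compatible distribution whose configured repositories provide the cloud-init package:

sudo dnf install -y cloud-init
sudo systemctl enable cloud-init.service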

4 - IP4G CLI (pcloud)

The pcloud command line tool allows you to create and manage the lifecycle of VMs on your IBM Power for Google Cloud instance. You can use the pcloud command line tool to manage related resources such as VM images and data volumes. The pcloud command line tool is a stand alone executable file that is available for use on Mac, Windows, and Linux operating systems. You can obtain the latest pcloud binaries from the cli download site.

4.1 - Authentication in the CLI

Before you can start using the pcloud tool, you must have an available Power Cloud instance that is provisioned and attached to a selected GCP project within your organization. This process occurs when you subscribe to the offering in the GCP Marketplace. Next, run the pcloud auth login command to authenticate to your cloud instance. When you authenticate, you must use the registered Google identity that is associated with the organization and the billing account that is used in the subscription. Running the pcloud auth login command provides you with a unique code to register your pcloud instance. In the following example, XXXX-YYYY represents the unique code.

pcloud auth login
To authorize pcloud you will need to complete following steps (within 30 minutes):
  1. Navigate to: https://www.google.com/device
  2. Enter the code: XXXX-YYYY
  3. Select your Google ID that was registered with IBM Power Systems for Google Cloud


Once these steps have been completed the login command will complete within 5 seconds

When you go to https://www.google.com/device, you must enter the code in the dialog box. Next, a popup is displayed and you select the appropriate identity. After you complete this process, pcloud is now enabled to operate against the Power Systems Cloud instance that is associated with your Google identity. To verify that pcloud is enabled, run the pcloud config list, pcloud compute clouds list, and pcloud compute clouds describe commands. The following example displays the correct sequence of running these commands.

# pcloud config list
accountID: test
cloudID: 7f16fae4f3f54d8bb62f75645db56905
cloudName: test-us-east4
region: us-east4

# pcloud compute clouds list
CloudID                           Name              Region       Current
7f16fae4f3f54d8bb62f75645db56905  test-us-east4     us-east4     true
7f8e1c3032484fa390d7e87d329afdb1  test-us-central1  us-central1  false

# pcloud compute clouds describe test-us-east4
cloudID: 7f16fae4f3f54d8bb62f75645db56905
name: test-us-east4
accountID: test
ibmCustomerNumber: "1234"
region: us-east4
storageTypes: ssd, standard
defaultStorageType: standard
usage:
  memory: 152
  cores: 21.5
  storage: 3.378
  instances: 7
limits:
  memory: 640
  cores: 70
  storage: 50
  instances: 40
  peeringNetworks: 1
  peeringBandwidth: 1000
limitsPerVM:
  memory: 400
  cores: 15

4.2 - CLI Documentation

You can use the --help option with all subcommands to get additional information and available options for each of the operations. You can run the pcloud docs command to generate an up-to-date collection of documentation in markdown formatted (.md) files.

By default, the markdown files are placed in a newly created pcloud-doc directory.

You can use the -d option with pcloud docs to specify an alternate directory that will be created.

If pcloud-doc or the alternate directory already exists, it must be renamed or removed before running pcloud docs.
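
For example, to generate the documentation into a new directory named pcloud-reference (a hypothetical name):

pcloud docs -d pcloud-reference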

To view the markdown files, you can use a browser plug-in or other available tools. For example, you can use the Markdown Viewer plug-in for the Chrome browser.

The pcloud_main.md file is the top level document with links to specific topics.

4.3 - Adding Images to the Image Catalog

Before you can create your first VM on your Power cloud instance, you must have a VM image in your cloud instance catalog. IBM provides a few stock images. These stock images are ideal for creating your first VM. The service also supports options to bring your own images.

To view available stock images, run the pcloud compute images list -a command. The following example displays the output from running this command:

pcloud compute images list -a
ImageID                               Name
9f72fc9b-a8b9-4d50-ad2d-65564a80b6d8  7100-05-04
0d50200b-c1b7-41fc-a858-7d3627714384  7200-03-03

The previous command without the -a flag can be used to list the images that are currently in your image catalog.

pcloud compute images list
ImageID                               Name
a232f02c-f041-480a-8b5a-87058710e2b3  7200-03-03

To copy a stock image into your cloud catalog, run the pcloud compute images create command. The following example uses the AIX 7100-05-04 stock image.

pcloud compute images create 9f72fc9b-a8b9-4d50-ad2d-65564a80b6d8

Now that you created the 7100-05-04 image, it is part of the cloud instance catalog and can be used to deploy VMs.

4.4 - Adding Networks in the CLI

The pcloud command line interface provides functionality to add a network interface to a virtual machine.

The command to add the additional network interface is as follows:

pcloud compute instances attach-network <InstanceName> --network <NetworkName[:IPAddress]> [flags]

Flags

Flag                    Description
-h, --help              Help for attach-network
-n, --network string    Network to attach to the VM Instance. The value must follow the <NetworkName[:IPAddress]> form. "NetworkID" can be provided instead of "NetworkName"; "IPAddress" is optional. Required.

Global Flag             Description
-F, --format string     Available formats: "table", "yaml", "json", "csv". Default is command specific. Can be used with describe and list subcommands.
-D, --log.dir string    Override Log file directory
-L, --log.file string   Override Log file name
-V, --verbosity string  Override Log verbosity

After a new interface has been created, the IP address will not automatically be assigned to the interface in the OS. Use the standard procedure for the operating system running in the VM to assign the IP address to the interface and bring it up.
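
For example, on a Linux VM the new address could be brought up temporarily with iproute2. The interface name, address, and prefix length below are placeholders; use the values from the instance description and make the change persistent with your distribution's network tooling:

ip addr add 10.3.4.142/24 dev eth1
ip link set eth1 up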

4.5 - VM Creation Using the CLI

After you have an image and network, you can create a VM by running the pcloud compute instances create command. The following is an example of the output from using the -h flag for this command:

# pcloud compute instances create -h
Create a VM Instance in IBM Power Cloud.

 Most of the flags have default values, but you must enter the VM name to be created, and mandatory image and network names.
 You can list the images of your Cloud by running the "pcloud compute images list" command.
 You can list the available networks by running the "pcloud compute networks list" command.

 Creating a new VM instance can take a few minutes to fully complete. You can look at the state
 of the VM by using the "pcloud compute instances describe" subcommand.

Usage:
  pcloud compute instances create <InstanceName> --image <ImageName> --network <NetworkName[:IPAddress]> [flags]

Flags:
  -a, --affinityPolicy string   Affinity policy for replicants being created (must be one of {"affinity", "anti-affinity", "none"}) (default "none")
  -c, --cores float             Number of cores to allocate to the VM Instance (default 2)
  -h, --help                    help for create
  -i, --image string            Image to allocate to the VM Instance (note that 'ImageID' or 'ImageName' can be used) (required)
  -k, --keypair string          SSHKeyPair Name
  -m, --memory float            Memory size (in GB) to allocate to the VM Instance (default 4)
  -s, --namingScheme string     Replicants naming scheme (must be one of {"prefix", "suffix"}) (default "suffix")
  -n, --network strings         Networks to assign to the VM Instance, (values must follow the <NetworkName[:IPAddress]> form, note that 'NetworkID' can be provided instead of 'NetworkName', 'IPAddress' is optional, several --network allowed) (required)
  -P, --pinPolicy string        VM pinning policy (must be one of {"none", "soft", "hard"}) (default "none")
  -t, --proctype string         Dedicated, Shared or Capped Processing Type (must be one of {"dedicated", "shared", "capped"}) (default "dedicated")
  -r, --replicants float        Number of replicants (default 1)
  -T, --storagetype string      Storage type for VMInstance deployment (required only if image is a "stock" image - default value set).
                                Storage type depends on the region of your cloud and can only be set from a short list of available values.
                                You can list the available storage types and the default storage type for your cloud by running the "pcloud compute clouds describe" command.
  -u, --userdata string         Cloud-init user defined data file (optional, if necessary data are uuencoded before being sent)
  -v, --volume strings          Volumes to assign to the VM Instance (several --volume allowed, note that 'VolumeID' or 'VolumeName' can be used)

Global Flags:
  -F, --format string      Available formats: 'table', 'yaml', 'json', 'csv'.
                            Default is command specific.
                            Can be used with describe and list subcommands.
  -D, --log.dir string     Override Log file directory
  -L, --log.file string    Override Log file name
  -V, --verbosity string   Override Log verbosity

Some of the flags have default values, but you must enter several parameters. For example, you must enter the VM name and the mandatory image and network names.

You can list the available networks by running the pcloud compute networks list command. A cloud instance has at least one network that was created when it was initially provisioned. This network is what connects the cloud instance and its VMs to GCP.

If you run the command in the following example, you create a new AIX 7.2 VM with two entitled shared cores of compute and 8 GB of RAM.

pcloud compute instances create mytestvm -i 7200-03-03 -m 8 -c 2 -t shared -n gcp-network

Creating a new VM instance can take a few minutes to fully complete. You can look at the state of the VM by using the describe subcommand. In the following example, the VM is still booting up and initializing.

# pcloud compute instances describe test-vm10
instanceID: 4f99e4c5-b8e4-4751-80a8-dd02ccbb00d0
name: test-vm10
cloudID: 7f16fae4f3f54d8bb62f75645db56905
systemType: s922
cores: 2
procType: shared
memory: 24
pinPolicy: none
status: ACTIVE
health:
  status: OK
  lastUpdate: 2021-04-22T17:23:12.354988
  reason: '-'
fault:
  code: 0
  created: '-'
  details: '-'
  message: '-'
imageID: 91e656a6-a9a1-4a99-abb9-7bcb366c3546
networks:
- ipAddress: 192.168.0.17
  macAddress: fa:b8:ef:cf:4b:20
  networkName: gcp-network
  networkID: 8e72b5cc-9e50-4b06-bc56-eb4e1781eefe
volumes:
- name: test-vm10-4f99e4c5-00000ad1-boot-0
  ID: bc141301-85a2-461e-8c68-1f0bc67a35c4
  storageType: standard
  size: 20
  shareable: false
  bootable: true
  bootVolume: true
creationDate: "2021-04-22T17:09:37.000Z"
updateDate: "2021-04-22T17:09:37.000Z"
progress: 0

To initially access the new VM, you can use the console subcommand. This subcommand generates a url that can be used in your web browser to access the console of the VM. A recommended first task is to create a password for root using the passwd command so that ssh as root can be used with the VM.

pcloud compute instances console mytestvm


console: https://pforg.converge.cloud/console/index.html?token=<token>


4.6 - Working with Data Volumes in the CLI

Volume Types

In IP4G there are two volume types. The volume types are named HDD and SSD and correspond to the following characteristics:

Region             Description
US East            SSD and HDD are different hardware types. Performance is dictated by the corresponding hardware.
All other regions  SSD and HDD are both IBM Flash Storage, with a QoS limit of 10 IOPS/GB for SSD and 3 IOPS/GB for HDD.

You can see an individual volume's IOPS limit using:

pcloud compute volumes describe <volume name>

Working with additional data volumes

You can create more data volumes to hold application data or support multi-disk use cases with your VMs by using the compute volumes subcommand. The following are examples of using the compute volumes subcommand.

To list out existing volumes:

pcloud compute volumes list
VolumeID                              Name                                    Size  StorageType    State      Shareable  Bootable
696226c7-0a97-4917-a220-9bcbe828dab8  mytestvm-7f11d296-000006ac-boot-0       20    standard       in-use     false      true
e2c688bb-962d-413c-ba14-804e9fbb6042  instanceTestN-ef7c9d9d-00000697-boot-0  32    standard       in-use     false      true

To create a new 10 GB SSD volume and display it in the list of volumes:

pcloud compute volumes create my-test-volume -s 10 -t ssd
Volume "my-test-volume" created with ID: 4cd31e0e-eea1-4d3d-8b15-90a272b27bc0


pcloud compute volumes list
VolumeID                              Name                                    Size  StorageType    State      Shareable  Bootable
4cd31e0e-eea1-4d3d-8b15-90a272b27bc0  my-test-volume                          10    ssd            available  false      false
696226c7-0a97-4917-a220-9bcbe828dab8  mytestvm-7f11d296-000006ac-boot-0       20    standard       in-use     false      true
e2c688bb-962d-413c-ba14-804e9fbb6042  instanceTestN-ef7c9d9d-00000697-boot-0  32    standard       in-use     false      true

To show the details of a volume:

# pcloud compute volumes describe test-vm10-4f99e4c5-00000ad1-boot-0
volumeID: bc141301-85a2-461e-8c68-1f0bc67a35c4
name: test-vm10-4f99e4c5-00000ad1-boot-0
cloudID: 7f16fae4f3f54d8bb62f75645db56905
storageType: standard
size: 20
shareable: false
bootable: true
state: in-use
instanceIDs:
- 4f99e4c5-b8e4-4751-80a8-dd02ccbb00d0
creationDate: "2021-04-22T17:09:44.000Z"
updateDate: "2021-04-22T17:10:06.000Z"

The following example shows how to attach the new volume to an existing VM. In the example, my-test-volume is attached to the mytestvm VM.

pcloud compute instances attach-volume mytestvm -v my-test-volume
"rcb-test-volume" Volume being attached to "mytestvm" VM Instance (complete attach is not immediate)

4.7 - pcloud CLI Release Notes

Version 1.100.0 (2024-12-09)

  • Add listing volume pools using pcloud compute volumes spools
  • Add Volume Pool specification to allow users to specify the target volume pool when using pcloud compute volumes create
  • Add pcloud compute instances dumprestart to enable dumprestart functionality for VM Instances

Version 1.99.11 (2024-09-30)

  • Expand acceptable datetime inputs for pcloud compute events list

Version 1.91.0 (2023-12-05)

  • Add WWN to pcloud compute volumes describe

Version 1.5.0 (2023-08-04)

  • Add volume transfer capability with pcloud compute volumes transfer

5 - Security Privacy and Compliance

IBM Power for Google Cloud (IP4G) is designed to provide a robust and secure environment for your mission-critical applications. Security and compliance are top priorities, and we employ a shared responsibility model to ensure your data and workloads are protected. This means that while Google Cloud manages the security of the underlying infrastructure, you are responsible for the security in the cloud, encompassing your operating systems, applications, and data.

5.1 - Shared Responsibility Model for IBM Power for Google Cloud

IBM Power for Google Cloud is an Infrastructure-as-a-Service offering provided by Converge on the Google Cloud Marketplace. It provides compute, storage and network services on demand with a capacity based pricing model and provides high performance, low latency connectivity to Google Cloud services. The Cloud service requires the customer to operate their own Google Cloud Organization and connect to IBM Power for Google Cloud using Google Private Services Access.

IBM Power for Google Cloud segments the service management (control plane) and the data access (data plane) across different endpoints so that neither can impact the other to provide a secure architecture. The IBM Power for Google Cloud control plane consists of the Web Console, pcloud CLI, and API, all managed by Converge. The data plane uses the Google Cloud private services access (PSA) framework to connect the dedicated IBM Power for Google Cloud Instance to a customer Google Organization.

Each IBM Power for Google Cloud customer is allocated a dedicated Service Producer VPC Network and Service Producer Project managed by Converge. Strong tenant isolation is maintained within the IBM Power for Google Cloud infrastructure, with isolated L2 and L3 network domains per customer and a multi-tenant compute hypervisor and storage architecture.

Encryption

IBM Power for Google Cloud block storage Volumes are encrypted at rest using AES-256 by default. Data is striped across a distributed array of disks for performance and durability. Encryption keys are managed by IBM Power for Google Cloud and rotated automatically. Customers who would like to manage their own encryption keys must configure operating system or application based encryption in addition to the storage encryption provided by IBM Power for Google Cloud.

The IBM Power for Google Cloud (IP4G) network fabric provides private network connectivity between Virtual Machines in IBM Power for Google Cloud and Google Cloud. All IP4G network traffic traverses physical connections in a Google Cloud Regional Extension datacenter. Network traffic from IBM Power for Google Cloud to Google Cloud traverses a private Google Cloud connection between a Google Cloud Regional Extension data center and Google Cloud. Customers are expected to enable secure communication protocols for their applications to encrypt data in transit between IP4G and Google Cloud, as well as on internal networks within IP4G. All data transferred during Live Partition Mobility is encrypted in transit for IBM Power for Google Cloud.

Shared Responsibility

IBM Power for Google Cloud provides an API, CLI, and Web Console that allows the customer to create, delete and modify the compute, storage and networking of their IBM Power for Google Cloud Instance. The customer must authorize users to access these interfaces and it is the responsibility of the customer to ensure the appropriate Google Cloud Identities are permitted to the customer Cloud Instance.

The customer is responsible for configuring their Google Cloud organization to connect to the service.

As with any Infrastructure as a Service offering, the bulk of security responsibilities are placed on the customer to provision resources in a way that meets their regulatory and compliance requirements. Converge is responsible for the underlying infrastructure and physical security.

Customer Responsibility

  • Usage associated with IBM Power for Google Cloud Subscription
  • Operations for virtual machine Instances deployed
  • Authorization and Authentication to IBM Power for Google Cloud using Google Cloud Identity
  • Network security for access to virtual machine instances
  • Guest operating system, data, and content
  • Deployment of IBM Power for Google Cloud virtual machine instances

Converge Responsibility

  • Audit logging for IBM Power for Google Cloud platform events
  • Network isolation and availability
  • Storage encryption and availability
  • IBM Power Control Plane and Hypervisor
  • Hardware (IBM Power Systems, Storage, and Network)
  • Data Center Power, Cooling, and Security

6 - Additional Information

IP4G FAQ, Availability, Backups, and Terminology

Additional information to assist users in consuming IP4G services.

6.1 - FAQs

Frequently Asked Questions for IBM Power for Google Cloud Platform (IP4G)

6.1.1 - General FAQs

Frequently Asked Questions for IBM Power for Google Cloud Platform (IP4G)

General FAQ

What are the SLAs for Support?

Support for IBM Power for Google Cloud is delivered via the Google Collaborative Support model.

An IBM Power for Google Cloud customer must have a Google Premium or Enhanced Support contract for support SLAs. The customer will contact Google for support, and SLAs will align to the support contract. For more information, see Google Cloud Support.

Additional details about Google Support can be found in the Google Technical Support Services Guidelines.

What Google Cloud regions are supported?

IBM Power for Google Cloud is available in the following regions.

  1. us-east4 (N. Virginia)
  2. us-central1 (Iowa)
  3. europe-west3 (Frankfurt)
  4. europe-west4 (Netherlands)
  5. northamerica-northeast1 (Montréal)
  6. northamerica-northeast2 (Toronto)

We are continuously evaluating expansion to new regions based on market demand. Contact power4gcp@googlegroups.com if your workload requires a region that is not listed.

What SLA is available for resizing and restarting Virtual Machines?

IBM Power for Google Cloud provides a self-service interface for interacting with IBM i, RHEL Linux and AIX workloads. The customer can resize and restart virtual machines using the API, Web Console or CLI.

Provisioning SLO for new virtual machines is 20 minutes until the system is ready for access via ssh.

What is the maximum Virtual Machine size?

Default core limits are identified by region in the table below. Some limits can be increased; contact sales using the Marketplace Page or open a support request if you require increased limits.

Region                   Max Core Count Per VM        Max RAM Per VM
us-east4                 15 (Power 9), 21 (Power 10)  400 GB (Power 9), 1024 GB (Power 10)
us-central1              15 (Power 9)                 400 GB (Power 9)
europe-west3             15 (Power 9), 21 (Power 10)  400 GB (Power 9), 1024 GB (Power 10)
europe-west4             21 (Power 10)                976 GB (Power 10)
northamerica-northeast1  21 (Power 10)                1024 GB (Power 10)
northamerica-northeast2  21 (Power 10)                1024 GB (Power 10)

What is the maximum bandwidth available between virtual machines on the same host vs different hosts?

The maximum bandwidth between two Power Virtual Machines on different hosts is up to 10 Gb per second. However, actual throughput can be affected by VM sizing, OS configuration, and the application. Maximum bandwidth between two Power Virtual Machines on the same host is expected to be greater. We recommend deploying workloads and validating performance if reaching a platform performance maximum is critical.

What is the maximum Core and RAM that can be allocated to an affinity group?

The total resources allocated to an affinity group cannot exceed the maximum core and RAM for a single Virtual Machine.

Is the customer notified when a Virtual Machine will be moved to another Power Host?

Yes. During onboarding the customer provides contacts for maintenance notifications. Converge will notify the customer contact via e-mail for scheduled or emergency maintenance.

  • For Scheduled Maintenance, a notification will be sent to the customer two weeks prior to scheduled maintenance.
  • For Emergency Maintenance, a notification will be sent when the Virtual Machine is being moved. Emergency maintenance is only performed when Power Host health is severely impacting Virtual Machine availability.

What do soft and hard pinning mean?

By default VMs are not pinned when created. You can optionally specify hard or soft pinning when creating a VM. Virtual machines with Pin Policy set to “soft” will be live migrated back to the original Host System once the maintenance is complete. Virtual machines with Pin Policy set to “hard” will be powered down during maintenance.
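
For example, a soft pin can be requested at creation time with the documented -P flag. The VM, image, and network names here are placeholders:

pcloud compute instances create my-pinned-vm -i 7200-03-03 -n gcp-network -P soft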

What are the affinity and anti-affinity groups?

The customer can create groups of VMs that should live on the same host or should not live on the same host.

When Soft Pinning is configured, how long will the Virtual Machine be removed from the original Host?

Up to 24 hours. If additional time is expected, a notification will be sent prior to maintenance.

Can you provide a sample of a scheduled maintenance notification?

A sample notification is shown below.

One or more IBM Power for Google Cloud Virtual Machines are running on a host system that requires maintenance. The following virtual machines will be live migrated to a new host system using Live Partition Mobility (LPM).

Impacted Virtual Machines by Name: ip4g-vm-1, ip4g-vm-2

Virtual machines with Pin Policy set to “soft” will be live migrated back to the original Host System once the maintenance is complete. Virtual machines with Pin Policy set to “hard” will be powered down during maintenance.

During live migration, your virtual machine might experience a decrease in Volume, Compute, Memory, and Network performance for a short period of time.

This maintenance is required. If you would like to migrate your virtual machine prior to the scheduled maintenance, please open a support ticket to live migrate your virtual machines preemptively.

Required action: No action is required from you. Converge will monitor and ensure restoration of compute, block storage, and networking services.

Recommended actions: Ensure your team is aware of the maintenance and is available to validate any critical applications during and after the maintenance process is complete. Take additional care if your workloads are sensitive to brief loss of performance or connectivity.

Our intent is to strengthen our infrastructure and deliver you the best experience possible. If you have questions or concerns about this scheduled maintenance, or if you would like to live migrate your virtual machine prior to the scheduled maintenance, please contact Google Customer Care and reference maintenance event XXXX.

Sincerely,

IBM Power for Google Cloud Operations

What security and compliance certifications are in place?

IBM Power for Google Cloud undergoes independent verification of security, privacy, and compliance controls to demonstrate compliance.

Audit reports are available on request via contractscompliance@convergetp.com.

What is the latency to IBM Power for Google Cloud from my datacenter?

For workloads that require low latency communication with applications running in IP4G, Converge recommends deploying on services such as:

  • Google Compute Engine
  • Google Kubernetes Engine

Further, for best results, deploy in the same region. A heterogeneous workload can have latency as low as 1 ms between IBM Power for Google Cloud and services like Google Compute Engine.

In some cases Google Cloud services are not a good fit for a workload. Instead, a hybrid cloud solution is required. However, low latency to IBM Power for Google Cloud is still important. To build a low latency hybrid cloud solution, the customer must select an appropriate data center to host the non-Google Cloud workload.

Google Cloud provides a list of low latency colocation facilities. They are a subset of all colocation facilities that can provide dedicated Cloud Interconnects. Customers can choose to establish a direct relationship with these low latency colocation facilities and can achieve less than 5 ms of latency to a Google Cloud region. When selecting a colocation facility for low latency connectivity to IBM Power for Google Cloud, Converge recommends the facilities in the table below. They allow for the lowest possible latency to an IBM Power for Google Cloud region.

Google Cloud Region                 Dedicated Cloud Interconnect Zone  Colocation Facility
us-east4 (Virginia)                 iad-zone1-1, iad-zone2-1           Equinix Ashburn (DC1-DC11)
us-central1 (Iowa)                  cbf-zone1-575, cbf-zone2-575       Nebraska data centers (1623 Farnam)
northamerica-northeast1 (Montreal)  yul-zone1-99002, yul-zone2-99002   Cologix MTL10-H
northamerica-northeast2 (Toronto)   yyz-zone1-2206, yyz-zone2-2206     Equinix Toronto (TR2)
europe-west3 (Germany)              fra-zone1-58, fra-zone2-58         Interxion Frankfurt
europe-west4 (Netherlands)          grq-zone1-532, grq-zone2-532       QTS Netherlands

Why does the hostname revert after being changed?

The following applies to both AIX and Linux. To prevent cloud-init from resetting the hostname, edit the cloud-init configuration file. On AIX this is usually /opt/freeware/etc/cloud/cloud.cfg.

Set the preserve_hostname parameter to true so that cloud-init leaves the existing hostname unchanged.
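
A minimal sketch of the relevant setting (the Linux path /etc/cloud/cloud.cfg is typical; adjust for your installation):

preserve_hostname: true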


6.1.2 - AIX FAQs

Frequently Asked Questions about AIX for IBM Power for Google Cloud Platform (IP4G)

AIX

The following FAQs are related to the AIX operating system.

What versions of AIX are supported?

We follow IBM current supported versions.

The following AIX versions are supported:

  • AIX 7.3 and higher (Recommended)
  • AIX 7.2 TL4 SP1 or higher (Minimum Recommended)
  • AIX 7.1 TL5 SP9 is available in extended support (since April 2023)
  • Extended support for prior versions is not currently available. A customer can choose to run legacy versions of AIX. However, no SLA is provided and support is not available.

Can IP4G run IBM AIX Workload Partitions?

Yes. Supported operating systems include any supported level of AIX.

Can IP4G run IBM AIX 5.3 Versioned Workload Partitions?

AIX 5.3 can run as a Versioned Workload Partition (Versioned WPAR) on an AIX 7.2 host system in IP4G. This configuration is up to the customer to complete. However, the following guidelines should be considered.

  • Required Operating Systems: AIX 5.3 TL12 SP5.
  • AIX 5.3 and Versioned WPARs are no longer supported by IBM.
  • Converge can only provide best-effort support.
  • Versioned WPAR filesets are available via IBM RPQ P91337.
  • AIX 5.3 filesets are only available through IBM.
  • PowerHA cannot run within the Versioned WPAR.
  • The Versioned WPAR may be able to run as an application under PowerHA. Please see the IBM PowerHA SystemMirror for AIX Cookbook, section 13.2.5, Planning for a versioned WPAR.

6.1.3 - IBM i FAQs

Frequently Asked Questions about IBM i on the IBM Power for Google Cloud Platform (IP4G)

IBM i

The following FAQs are related to the IBM i operating system.

What IBM i versions are supported?

IBM Power for Google Cloud uses POWER9 and POWER10 hardware. It supports the following versions of IBM i:

  • 7.2 TR8
  • 7.3 TR4
  • 7.4
  • 7.5

What options are available for upgrades and patching?

The customer is responsible for IBM i system maintenance including upgrades and patching. Converge can provide additional managed services for IBM i. Possible services include managed backup, basic OS updating and patching, and complete managed services. Please contact a Converge representative for more information.

What backup solutions are available for IBM i?

Reference architectures and managed services for IBM i backup on IBM Power for Google Cloud are in development. Currently, any agent-based solution is technically viable as long as it supports the customer's backup requirements. Customers should consider an IP-based VTL solution hosted in GCE as the recommended approach to IBM i backups in IP4G.

What Licensing Tiers are available?

We offer P10 (all regions), P20 (select regions) and P30 (select regions) licensing tiers for IBM i.

What OS features and LPPs are available?

OS Features and LPPs are available as follows:

OS Features

  • 5770-DG1: HTTP Server for i
  • 5770-JV1: Developer Kit for Java
  • 5770-NAE: Network Authentication Enablement for i
  • 5733-SC1: Portable Utilities for i
  • 5770-TC1: TCP/IP
  • 5770-TS1: Transform Services for i
  • 5770-UME: Universal Manageability Enablement for i
  • 5770-XE1: IBM i Access for Windows
  • Zend Community Edition
  • 5733-ARE: IBM Administration Runtime Expert
  • 5798-FAX: IBM Facsimile Support for i
  • 5770-SM1: IBM System Manager for i
  • 5770-DFH: IBM CICS Transaction Server for i
  • 5770-MG1: IBM Managed System Services for i
  • 5770-SS1: IBM i Option 23, OptiConnect
  • Any other no-cost OS feature from IBM

Licensed Program Products

  • 5770-SS1: IBM i Option 18 Media & Storage Extensions
  • 5770-SS1: IBM i Option 26 DB2 Symmetric Multiprocessing
  • 5770-SS1: IBM i Option 27 DB2 Multisystem
  • 5770-SS1: IBM i Option 38 PSF for IBM i Any Speed Printer Support
  • 5770-SS1: IBM i Option 41 HA Switchable Resources
  • 5770-SS1: IBM i Option 42 HA Journal Performance
  • 5761-AMT: Rational Application Management Toolset
  • 5770-AP1: Advanced DBCS Printer Support
  • 5733-B45: AFP Font Collection for i
  • 5770-BR1: Backup, Recovery and Media Services
  • 5761-DB1: System/38 Utilities
  • 5761-CM1: Communications Utilities
  • 5761-DS2: Business Graphics Utility
  • 5648-E77: InfoPrint Fonts
  • 5769-FN1: AFP DBCS Fonts
  • 5769-FNT: AFP Fonts
  • 5733-FXD: Integrated Domino Fax
  • 5770-PT1: Performance Tools
  • 5770-QU1: Query for i
  • 5770-ST1: DB2 Query Manager and SQL Dev Kit for i
  • 5733-XT2: XML Toolkit
  • 5770-XW1: IBM i Access Family
  • 5770-MC1: IBM Cloud Storage Solutions

Rational Developer Pack

This is not part of the standard LPP package; it is an add-on.

5770-WDS: Rational Development Studio for i

  • ILE compilers
  • Heritage compilers
  • Application Development Toolset (ADTS)

5733-RDW: Rational Developer for i

  • RPG Tools, basic user
  • COBOL Tools, basic user

6.1.4 - Linux FAQs

Frequently Asked Questions about Linux on the IBM Power for Google Cloud Platform (IP4G)

Linux

The following FAQs are related to Linux operating systems.

What versions of Linux are supported?

We follow IBM current supported versions.

The following Linux versions are supported:

  • RHEL LE 7.8
  • RHEL LE 7.9
  • RHEL LE 8.4 and later
  • RHEL LE 9.0
  • SLES LE 12 and 15
  • SLES LE 15 SP2 and 15 SP3
  • Ubuntu LE 18.04

6.2 - Availability

Availability for IP4G

Availability for IBM Power for Google Cloud (IP4G) is based on region and available hardware.

Region

IP4G is available in the following regions:

  • us-central1 (Council Bluffs, Iowa)
  • us-east4 (Northern Virginia)
  • europe-west3 (Frankfurt)
  • europe-west4 (Netherlands)
  • northamerica-northeast1 (Montreal)
  • northamerica-northeast2 (Toronto)

Hardware - Council Bluffs (US-Central01)

Hardware available in Council Bluffs is as follows:

S922 (9009-22A)
  • AIX: 7.1, 7.2, 7.3
  • IBM i: 7.2 TR8, 7.3 TR4, 7.4, 7.5
  • Linux: RHEL 7.8 (LE), RHEL 7.9 (LE), RHEL 8.4+, RHEL 9.0, SLES 12, SLES 15, SLES 15 SP2, SLES 15 SP3, Ubuntu 18.04

E950 (9040-MR9)
  • AIX: 7.1, 7.2, 7.3
  • IBM i: Not Applicable
  • Linux: RHEL 7.8 (LE), RHEL 7.9 (LE), RHEL 8.4+, RHEL 9.0, SLES 12, SLES 15, SLES 15 SP2, SLES 15 SP3, Ubuntu 18.04

S1024 (9105-42A)
  • AIX: 7.1, 7.2, 7.3
  • IBM i: 7.3 TR12, 7.4 TR6, 7.5 TR1
  • Linux: RHEL 8.4+, RHEL 9.0, SLES 15, SLES 15 SP3

Hardware - Northern Virginia (US-East04)

Hardware available in Northern Virginia is as follows:

S922 (9009-22A)
  • AIX: 7.1, 7.2, 7.3
  • IBM i: 7.2 TR8, 7.3 TR4, 7.4, 7.5
  • Linux: RHEL 7.8 (LE), RHEL 7.9 (LE), RHEL 8.4+, RHEL 9.0, SLES 12, SLES 15, SLES 15 SP2, SLES 15 SP3, Ubuntu 18.04

S1024 (9105-42A)
  • AIX: 7.1, 7.2, 7.3
  • IBM i: 7.3 TR12, 7.4 TR6, 7.5 TR1
  • Linux: RHEL 8.4+, RHEL 9.0, SLES 15, SLES 15 SP3

Hardware - Frankfurt (Europe-West3)

Hardware available in Frankfurt is as follows:

S922 (9009-22G)
  • AIX: 7.1, 7.2, 7.3
  • IBM i: 7.2 TR8, 7.3 TR4, 7.4, 7.5
  • Linux: RHEL 7.8 (LE), RHEL 7.9 (LE), RHEL 8.4+, RHEL 9.0, SLES 11, SLES 12, SLES 15, SLES 15 SP2, SLES 15 SP3, Ubuntu 18.04

E950 (9040-MR9)
  • AIX: 7.1, 7.2, 7.3
  • IBM i: Not Applicable
  • Linux: RHEL 7.8 (LE), RHEL 7.9 (LE), RHEL 8.4+, RHEL 9.0, SLES 12, SLES 15, SLES 15 SP2, SLES 15 SP3, Ubuntu 18.04

S1024 (9105-42A)
  • AIX: 7.1, 7.2, 7.3
  • IBM i: 7.3 TR12, 7.4 TR6, 7.5 TR1
  • Linux: RHEL 8.4+, RHEL 9.0, SLES 15, SLES 15 SP3

Hardware - Netherlands (Europe-West4)

Hardware available in the Netherlands is as follows:

S1022 (9105-22A)
  • AIX: 7.1, 7.2, 7.3
  • IBM i: 7.3 TR12, 7.4 TR6, 7.5 TR1
  • Linux: RHEL 8.4+, RHEL 9.0, SLES 15, SLES 15 SP3

Hardware - Montreal (NorthAmerica-NorthEast1)

Hardware available in Montreal is as follows:

S1022 (9105-22A)
  • AIX: 7.1, 7.2, 7.3
  • IBM i: 7.3 TR12, 7.4 TR6, 7.5 TR1
  • Linux: RHEL 8.4+, RHEL 9.0, SLES 15, SLES 15 SP3

S1024 (9105-42A)
  • AIX: 7.1, 7.2, 7.3
  • IBM i: 7.3 TR12, 7.4 TR6, 7.5 TR1
  • Linux: RHEL 8.4+, RHEL 9.0, SLES 15, SLES 15 SP3

Hardware - Toronto (NorthAmerica-NorthEast2)

Hardware available in Toronto is as follows:

S1022 (9105-22A)
  • AIX: 7.1, 7.2, 7.3
  • IBM i: 7.3 TR12, 7.4 TR6, 7.5 TR1
  • Linux: RHEL 8.4+, RHEL 9.0, SLES 15, SLES 15 SP3

S1024 (9105-42A)
  • AIX: 7.1, 7.2, 7.3
  • IBM i: 7.3 TR12, 7.4 TR6, 7.5 TR1
  • Linux: RHEL 8.4+, RHEL 9.0, SLES 15, SLES 15 SP3

Operating Systems

Licensing and support will vary according to the operating system used.

Operating System - AIX

AIX licensing and support is provided as part of the service. Clients are not able to migrate or use any owned AIX licenses on an IP4G Virtual Machine.

Operating System - IBM i

IBM i licensing and support is provided as part of the service. Clients are not able to migrate or use any owned IBM i Operating System or IBM i Licensed Program Products licenses on an IP4G Virtual Machine.

Operating System - Linux

Linux licensing is not provided as part of the service. Clients must acquire or bring their own Linux license for all Linux IP4G Virtual Machines.

6.3 - Backup Solutions

Backup recommendations for IP4G virtual machines

IBM Power for Google Cloud (IP4G) offers several possible strategies for backup. Many solutions are similar to those used in on-premises environments. Others are more cloud-specific. Below are some recommended options. This is not meant to be an exhaustive list.

Capture and Export

Capture and Export can take virtual machine (VM) images and store them in Google Cloud Storage. This works for all supported operating systems. Capture and Export leverages FlashCopy on the backing storage. Captured images are available from the interface for exporting to Google Cloud Storage, or cloning to additional virtual machines (VMs).

Exporting images is a resource-intensive operation. Only one Capture and Export operation can be queued at a time. As a best practice, capture only images containing the rootvg for AIX/Linux, or only the load source volume for IBM i. The maximum image size is 10 TB.

See [Capturing and exporting a virtual machine](capture-and-export.md) for more information.

AIX backup solutions

Almost any agent-based backup solution supporting AIX can be used in IP4G. The backup server or proxy should be implemented within IP4G or GCE.

Several solutions can also integrate directly with Google Cloud Storage. Customers are responsible for implementing, maintaining, and managing their own backup environment.

If a shorter backup window or a larger backup set is required, deploy backup infrastructure within IP4G, then replicate backups to GCE or elsewhere using the native methods of the individual backup solution. Keep in mind that implementing and testing the backup architecture is a customer responsibility.

Managed backup services can also be utilized.

IBM i backup solutions

For small IBM i VMs, IBM recommends IBM Cloud Storage Solutions for i (ICC). Larger VMs may want to implement an iSCSI-attached VTL solution. Most customers will want to leverage Backup, Recovery, and Media Services (BRMS) as a backup tool. All of the recommended solutions will integrate fully with BRMS. Managed backup services can also be utilized.

IBM Cloud Storage Solutions for i

This solution integrates with BRMS and Google Cloud Storage using the S3 protocol. It can manage tiering of local backups to Google Cloud Storage. This solution involves backup to virtual optical volumes. This may require additional storage for optical volumes on the VM, until they are offloaded to Google Cloud Storage.

Key Prerequisites:

  • Install and configure IBM Cloud Storage Solutions for i Licensed Program Product (5733-ICC).
  • Install and configure Digital Certificate Manager.
  • Obtain and install certificates for Google Cloud storage.
  • Approximately 2x used disk capacity to hold virtual optical images during backup.
  • BRMS integration configured.

See the ICC Quick Start Guide for more information: ICC Quick Start.

iSCSI VTL

IBM i also supports backup to an iSCSI-attached Virtual Tape Library (VTL). Customers leveraging on-premises tape or VTLs may find this solution offers the greatest flexibility and familiarity. A VTL solution will most likely also be preferable for VMs larger than 1-2TB.

VTL solutions that offer iSCSI connectivity to IBM i include QUADStor and FalconStor.

Converge has tested the QUADStor solution. QUADStor is a free software package that can be installed on most Linux distributions. QUADStor supports both x86 and Power architectures. It can be deployed in IP4G as well as GCE. Furthermore, it can be configured to tier to Google Cloud Storage under the control of BRMS. QUADStor offers paid support options for production instances.

QUADStor prerequisites

Prerequisites for use of QUADStor are as follows:

  • Supported Linux distribution (x86 or Power).
  • 8 cores.
  • 64 GB RAM.
  • Sufficient storage based on current backup policies.
  • Latest IBM i PTFs installed for iSCSI Support. See IBM i Removable Media: Support for iSCSI VTL. Note that the linked document was written for FalconStor. However, the PTF requirements apply to QUADStor as well.

Converge has measured approximately a 4:1 data reduction ratio when using QUADStor in deduplication mode.

FalconStor prerequisites

Customers should contact FalconStor directly for implementation requirements.

Linux backup solutions

Most of the same solutions that work for AIX will work with Linux as well. Additionally, there may be open source solutions that can meet customer backup requirements. Managed backup services can also be leveraged.

Managed backup services

IP4G is an Infrastructure as a Service (IaaS) solution. It does not include managed backup as part of the offering. Converge can offer managed backup services and solutions outside of the IaaS subscription. Please contact a Converge representative for more information.

6.4 - Terminology

Terminology for IP4G

List of Terms

The following is a list of terms used within the IP4G offering.

  • Active Node Halt Policy (ANHP) When a cluster split occurs, this policy ensures that only one node is operating. It stops a previously active LPAR hosting an application. Then, makes sure the LPAR has quieted, before the application is brought online again with a standby LPAR.
  • Affinity Policy IP4G policy used to determine if two IP4G Virtual Machines can exist on the same host. Can also be created to determine if the storage of two virtual machines can exist on the same Storage Area Network (SAN).
  • AIX Acronym. Stands for Advanced Interactive eXecutive. A UNIX based operating system from IBM.
  • Amazon Web Services (AWS) A service from Amazon that provides on-demand cloud computer platforms and Application Programming Interfaces (API).
  • ANHP Acronym. Stands for Active Node Halt Policy. When a cluster split occurs, this policy ensures that only one node is operating. It stops a previously active LPAR hosting an application. Then, makes sure the LPAR has quieted, before the application is brought online again with a standby LPAR.
  • API Acronym. Stands for Application Programming Interface. A type of software interface. It allows two or more programs to communicate using a set of definitions and protocols.
  • Application Programming Interface (API) A type of software interface. It allows two or more programs to communicate using a set of definitions and protocols.
  • Asynchronous When two or more events or objects do not exist or happen simultaneously. Especially when a specific operation begins after the preceding operation ends.
  • AWS Acronym. Stands for Amazon Web Services. A service from Amazon that provides on-demand cloud computer platforms and Application Programming Interfaces (API).
  • Backup, Recovery, and Media Services (BRMS) Backup and recovery software from IBM. Provides an orderly way to retrieve lost or damaged data.
  • Boot Image A boot image is a disk image. Specifically, a disk image that allows the associated hardware to boot.
  • Classless Inter-Domain Routing (CIDR) First introduced in 1993. It is a method for allocating IP address and for IP routing.
  • CLI Acronym. Stands for Command Line Interface. A text-based user interface for a program or computer.
  • Command Line Interface (CLI) A text-based user interface for a program or computer.
  • Console A program that runs in a command prompt window. A user can input commands and view the output, such as results or status messages.
  • Converge Technologies Converge is a software-enabled IT and cloud solutions provider focused on delivering industry-leading solutions and services. In the Google Marketplace specifically, Converge is the provider of IBM Power Server hardware.
  • DB2 IBM product. A hybrid relational and XML data server.
  • Dedicated Service Tools (DST) Service functions that are only available from the console. Can run both when the operating system is available or unavailable.
  • Disaster Recovery (D/R) or (DR) The method of regaining access and functionality of IT infrastructure after a disaster.
  • (D/R) or (DR) Acronym. Stands for Disaster Recovery.
  • Electronic Service Agent (ESA) A monitoring tool that proactively reports both hardware and software events. They are reported as soon as they are detected.
  • Entitled Software Support (ESS) Support for software where the software is covered by a valid Software Maintenance Agreement (SWMA).
  • EoSPS Acronym. Stands for End of Service Pack Support. That is the end of the maintenance period for a service pack or technology level.
  • FlashCopy FlashCopy is a function available for use in IBM’s storage systems. It can create copies which are immediately available for use.
  • FTP Acronym. Stands for File Transfer Protocol. A network protocol for transmitting files over Transmission control Protocol/Internet Protocol (TCP/IP).
  • Geographic Logical Volume Manager (GLVM) A software-based mirroring method. Allows for mirroring of data in real-time over unlimited geographic distance.
  • Geographic Mirroring Abbreviated as Geomirroring. A method of logical mirroring, with a stored, consistent backup copy. That copy is typically kept at a different location, separated by geography.
  • Geomirroring Abbreviation. Shortened form of Geographic Mirroring. A method of logical mirroring, with a stored, consistent backup copy. That copy is typically kept at a different location, separated by geography.
  • Google Cloud Organization The organization resource is the root node in the Google Cloud resource hierarchy and is the hierarchical super node of projects. For more information about how to acquire and manage an organization resource, see: https://cloud.google.com/resource-manager/docs/creating-managing-organization
  • Google Cloud Platform (GCP) Google’s public cloud services. Customers are able to use resources housed in Google’s data centers.
  • Google Cloud Project Mechanism in Google Cloud to organize all GCP resources. For more information, see: https://cloud.google.com/storage/docs/projects
  • Google Direct Interconnect A direct physical connection between an on-premises network and Google’s network.
  • Google Cloud Storage (GCS) A service for storing objects in Google Cloud.
  • GUI Acronym. Stands for Graphic User Interface. A method for users to interact with a system using icons and representations for files and applications.
  • HADR Acronym. Stands for High Availability Disaster Recovery. A method of providing disaster recovery with small to no loss of function for IT infrastructure.
  • Hardware Management Console (HMC) A console used to manage hardware. They typically provide a Graphic User Interface (GUI) and a Command Line Interface (CLI) for configuring and operating multiple managed systems.
  • High Availability (HA) Systems with High Availability operate continuously and are always available. In those systems, steps have been taken to avoid single points of failure.
  • HMC Acronym. Stands for Hardware Management Console. A console used to manage hardware. They typically provide a Graphic User Interface (GUI) and a Command Line Interface (CLI) for configuring and operating multiple managed systems.
  • Host Physical hardware executing virtual machines.
  • HTTP Acronym. Stands for Hypertext Transfer Protocol. A set of rules for transferring files. It is the foundation of data communication for the World Wide Web.
  • I/O Acronym. Stands for Input/Output. Refers to the operation of putting something in and getting something out in return. For example, a system receiving a command, and sending a signal in response to that command.
  • I/O processor (IOP) A processor that specifically handles Input/Output tasks.
  • IaaS Abbreviation. Stands for Infrastructure as a Service. A type of cloud computing where compute and storage resources are kept in the cloud.
  • IBM Cloud Storage Solutions for i (ICC) An IBM product. It allows clients to store data to the cloud for archiving, file sharing, and backup recovery.
  • IBM i An operating system developed by IBM for use on IBM Power systems.
  • IBM i Access Client Solutions (ACS) A Java based, platform independent, interface. It will run on most operating systems that support Java.
  • IBM Spectrum Protect An IBM product. It provides scalable data protection. That protection can cover physical file servers, applications, and virtual environments. Previously known as IBM TSM.
  • Image In computing it is a copy of the entire contents of a storage device. It represents a precise copy of the original, including data and organization.
  • Initial Program Load (IPL) The first initial step of loading an operating system on a computer. For example, loading the operating system of a mainframe into its main memory.
  • IP4G Abbreviation. Stands for IBM Power for Google Cloud.
  • IP4G VM Service Container for all included IP4G Virtual Machine instances within a Google Geographic Region.
  • IPL Acronym. Stands for Initial Program Load. The initial step of loading an operating system on a computer. For example, loading the operating system of a mainframe into its main memory.
  • Journaled File System (JFS) A journaling file system created by IBM. With JFS, changes to files are recorded in a separate log, the journal. This record is committed before the indexes to the file are updated.
  • Journaled File System 2 (JFS2) A journaling file system created by IBM. Refers specifically to the Enhanced Journaled File System. Created after IBM’s Journaled File System (JFS).
  • Jump server A system on a network for accessing and managing devices in a separate security zone.
  • Licensed Internal Code (LIC) Specifically, software for POWER6 systems that enables hardware on a system. It initializes the hardware so that the system boots up, operates correctly, and provides an interface.
  • Licensed Program Products (LPP) A complete software product that may include one or more filesets as well as packages.
  • Linux An operating system. Linux is an open-source, Unix-like operating system.
  • Logical Volume Manager (LVM) A system of managing filesystems or logical volumes.
  • LPAR Abbreviation. Stands for Logical Partition. An LPAR is a virtual division of a computer’s resources, enabling that set of resources to act independently. Sometimes used interchangeably with Virtual Machine (VM); however, there are subtle differences.
  • LVM Acronym. Stands for Logical Volume Manager. A system of managing filesystems or logical volumes.
  • Main storage dump (MSD) A process of collecting data from a system’s main storage.
  • mksysb image A file that is a system backup image. Created with the mksysb command.
  • N_Port ID Virtualization (NPIV) A Fibre Channel standard. It makes it possible to create multiple virtual ports on a single physical node port.
  • NAT gateway NAT is an acronym. It stands for Network Address Translation. A NAT gateway is a service that allows instances in a private subnet to connect to outside services while preventing external services from initiating a connection to them.
  • Network Installation Manager (NIM) An object-oriented system management framework. It installs and manages systems over a network.
  • Network Interface Card (NIC) A computer hardware component that connects a system to a network. It then controls data traffic to and from that connected network. Can also refer to Network Interface Controller.
  • NFS Acronym. Stands for Network File System. A method of storing files on a network that allows users to access remote files and directories as if they were local to the user.
  • NIM Acronym. Stands for Network Installation Management. May also stand for Network Installation Manager. Refers to a server, or the server’s process, of managing the installation of software. A NIM can manage the installation of a base operating system and any optional software. It can do this to one or more machines.
  • NPIV Acronym. Stands for N_Port ID Virtualization. A Fibre Channel standard. It makes it possible to create multiple virtual ports on a single physical node port.
  • On-premises Refers to resources or devices located at the same physical location. If a business has a physical site, anything located at that site would be considered on-premises.
  • Open Virtualization Appliance (OVA) An Open Virtualization Format (OVF) package, in a single file archive. It contains files for distribution of software to run on a virtual machine.
  • Open Virtualization Format (OVF) An open standard for packaging and distributing virtual appliances or software, for use in virtual machines.
  • pcloud A Command Line Interface (CLI). It is used for creating and managing virtual machines.
  • PDU Acronym. Stands for Power Distribution Unit. A device for controlling electrical power. Some resemble basic power strips. Others have additional features, such as surge protection.
  • Pip installs Python (pip) Pip is a package management system that manages packages written in Python.
  • PowerHA IBM product. Provides clustering technology with both failover protection (through redundancy) and scalability.
  • PowerHA SystemMirror IBM product. A storage-based clustering solution that uses mirroring to protect against hardware and software failures.
  • PowerVC IBM product. Software that enables use of Infrastructure as a Service (IaaS) for IBM Power systems.
  • PowerVC Manager IBM Product. Works with PowerVM to provision workloads and manage virtual images.
  • PowerVM IBM product. Provides scalable server virtualization for AIX, IBM i, and Linux applications for IBM Power systems.
  • Program temporary fix (PTF) A single fix, or a group of fixes, for an issue, packaged so that it is ready to install. When these fixes resolve a problem, they become permanent parts of the software they fix.
  • PuTTY A terminal emulator application. It can act as a client for computing protocols, including SSH, Telnet, rlogin, and raw TCP.
  • Reliable Scalable Cluster Technology (RSCT) An IBM product. It is a set of software products that provide a clustering environment for AIX and Linux.
  • Resource Optimized High Availability (ROHA) A function in IBM’s PowerHA SystemMirror. It automatically and dynamically manages Dynamic Logical Partition (DLPAR) resources.
  • REST API Combined Acronym. REST stands for Representational State Transfer. API stands for Application Programming Interface. Together they refer to a method of allowing two systems to communicate using HTTP protocols.
  • S3 Abbreviation. Stands for Simple Storage Service, an object storage service from AWS.
  • SAN Acronym. Stands for Storage Area Network. Refers to a network of storage devices that provide a shared pool of storage space.
  • Secure Copy Protocol (SCP) A method of transferring files between a local host and a remote host. It is based on SSH.
  • Service Packs (SP) Collections of updates, enhancements, and fixes for software. Typically packaged as an individual, installable bundle.
  • Service Update Management Assistant (SUMA) Software for the AIX operating system. With little configuration, it can automatically compare installed software with updates available in fix repositories and update a system.
  • Shared Storage Pools (SSP) A server-based storage virtualization method. It provides distributed storage access to a VIOS for client partitions.
  • SMIT Acronym. Stands for System Management Interface Tool. An interactive tool bundled with AIX. It uses standard AIX commands and Korn shell functions. It can accomplish almost any system administration task using a SMIT screen.
  • SMT Abbreviation. Stands for Simultaneous multithreading. A processor technology that allows multiple threads to run at the same time on the same processor.
  • SSD Acronym. Stands for Solid State Drive. A type of storage device that typically uses flash memory, instead of the spinning disk and movable read-write heads used in hard disk drives.
  • SSH Abbreviation. Stands for Secure Shell or Secure Socket Shell. A network protocol that provides a secure method of communication through an unsecured network.
  • Standard output (STDOUT) The standard stream to which command line programs write their output data.
  • Storage Area Network (SAN) Refers to a network of storage devices that provide a shared pool of storage space.
  • SUMA Acronym. Stands for Service Update Management Assistant. It is software for the AIX operating system. With little configuration, it can automatically compare installed software with updates available in fix repositories and update a system.
  • System reference code (SRC) A set of numbers and letters that represents a message from the system. That message gives information about system trouble, a hardware or software failure, or simply a status.
  • System Service Tools (SST) Part of the service function. Used on the system while the operating system is running.
  • TCP/IP address A method of assigning a value to a network or host. That value can be used to identify the host, allowing users and applications to communicate with it. This method uses Transmission Control Protocol and Internet Protocol, or TCP/IP.
  • Technology Level (TL) The relative level of development present in a software release, such as an operating system.
  • Transmission Control Protocol (TCP) One of the main protocols of the internet protocol suite. Typically used in conjunction with Internet Protocol (IP), as TCP/IP. Provides for reliable, error checked, and orderly delivery of information over an IP network.
  • Veeam Veeam Software, an information technology company. They specialize in backup, disaster recovery, and modern data protection.
  • VIOS Acronym. Stands for Virtual I/O Server. Software that allows for the sharing of physical I/O resources between client logical partitions within a server.
  • Virtual Ethernet Adapter (VEA) Software that operates like a physical network adapter. It allows logical partitions within the same system to communicate without a physical Ethernet adapter.
  • Virtual Local Area Network (VLAN) A method of isolating the traffic for a select group of devices that share a physical local area network. This separates their traffic from the traffic of other devices on that same network.
  • Virtual Machine (VM) Compute Resource using virtualized hardware to execute programs and applications. Multiple Virtual Machines could be executing on a single host system. The term Virtual Machine is similar to Logical Partition.
  • Virtual Machine Image (VMI) An executable image file from a virtual machine. It contains a virtual disk with a bootable operating system.
  • Virtual Machine Pinning IP4G policy related to the association level between an IP4G Virtual Machine and its Host.
  • Virtual Processor (VP) A representation of a physical processor, used by virtual machines.
  • Virtual Tape Library (VTL) A virtualized data storage solution that emulates physical tape libraries or tape drives. Typically used for backup and recovery purposes.
  • VM Instance Individual Virtual Machine (VM) or Logical Partition within a VM Service.
  • VM Pinning Setting a Virtual Machine (VM) to run on a specific host or hosts. This prevents it from running on any other hosts.
  • Volume Storage space formatted to hold directories and files. May be virtual or physical.
  • Volume group A storage pool across one or more physical volumes.
  • VPC Acronym. Stands for Virtual Private Cloud. A virtual, isolated, private cloud hosted on a public cloud.
  • VPC Peering A connection between Virtual Private Clouds (VPC). Makes it possible to establish a network between two VPCs.
  • VPN Acronym. Stands for Virtual Private Network. A private encrypted network running over a public network, such as the internet.
  • Wide Area Network (WAN) A computer network where the connected computers may be far apart.
  • Yellowdog Updater, Modified (YUM) An open-source package management utility, originally developed for Yellowdog Linux.

7 - Videos

Videos showing how to do tasks in IP4G

Watch the following videos to better understand the IBM Power Systems™ for Google Cloud service.

7.1 - Create a Subscription

Videos showing how to create a subscription to IP4G

Learn how to subscribe to the IBM Power Systems for Google Cloud service.

7.2 - Creating an Instance

Videos showing how to create an instance in IP4G

Creating an IBM Power Systems for Google Cloud instance

Learn how to create an instance by using the IBM Power Systems for Google Cloud service.

8 - API Documentation

Documentation for the IP4G Swagger API

You can use the IBM Power for Google Cloud API to easily deploy and configure virtual servers that are running AIX or IBM i workloads. For a list of all of the available API methods, go to IBM Power for Google Cloud API. To target the IBM Power for Google Cloud API, use the following base URL: service-broker-api.gpcloudtest.com.

8.1 - API Authentication

IBM Power for Google Cloud API Authentication

To work with the API, you must log in and authenticate yourself. Take note of your Authorization: Bearer <token> and Cloud ID. The bearer token is short-lived and must be refreshed periodically. Enter the following commands to log in and obtain the necessary information:

  1. pcloud auth login
  2. pcloud auth print-access-token
  3. pcloud compute clouds list

You can also enter the following URL into any browser to obtain your accessToken: https://service-broker-api.gpcloudtest.com/auth/v1/login. Select your Google account and copy the accessToken.
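As an illustrative sketch, the token from step 2 can then be supplied as a bearer token on API calls. The request path below is a placeholder, not an actual method; consult the IBM Power for Google Cloud API reference for real method paths:

  # Capture the short-lived access token from the pcloud CLI (step 2 above)
  TOKEN=$(pcloud auth print-access-token)

  # Call the API with the token; <path> is a placeholder for a real
  # method path from the Swagger documentation
  curl -H "Authorization: Bearer ${TOKEN}" \
       "https://service-broker-api.gpcloudtest.com/<path>"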

8.2 - API Error Handling

IBM Power for Google Cloud API Error Handling

This API uses standard HTTP response codes to indicate whether a method completed successfully. A 200 response indicates success. A 400-level response indicates a failure with the request, and a 500-level response indicates an internal system error.

HTTP error code | Description | Recovery
200 | Success | The request was successful.
400 | Bad Request | The input parameters in the request body are either incomplete or in the wrong format. Be sure to include all required parameters in your request.
401 | Unauthorized | You are not authorized to make this request. Log in again and retry the request. If this error persists, contact the account owner to check your permissions.
403 | Forbidden | The supplied authentication is not authorized to access this resource.
404 | Not Found | The requested resource could not be found.
408 | Request Timeout | The connection to the server timed out. Wait a few minutes, then try again.
409 | Conflict | The entity is already in the requested state.
500 | Internal Server Error | The service is currently unavailable. Your request could not be processed. Wait a few minutes and try again.
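As a minimal sketch (not part of the official API tooling), a shell script can branch on the returned status code and apply the recovery guidance in the table above; <path> is again a placeholder for a real method path:

  # Capture only the HTTP status code of the response
  STATUS=$(curl -s -o /dev/null -w "%{http_code}" \
       -H "Authorization: Bearer ${TOKEN}" \
       "https://service-broker-api.gpcloudtest.com/<path>")

  case "$STATUS" in
    200)     echo "Success" ;;
    401)     echo "Unauthorized: refresh your token with pcloud auth login" ;;
    408|500) echo "Transient error: wait a few minutes, then retry" ;;
    *)       echo "Request failed with HTTP status $STATUS" ;;
  esac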

9 - FalconStor StorSafe VTL for Google Cloud Overview

FalconStor StorSafe VTL for Google Cloud Overview

FalconStor StorSafe VTL for Google Cloud is the simplest way to deploy a modern Virtual Tape Library solution with Google Cloud. The subscription provides a StorSafe VTL license and deployment model for Google Cloud, giving you the flexibility to provision the infrastructure needed to suit your recovery time objective (RTO), recovery point objective (RPO), and cost requirements.

NOTE: Customers must have an active IBM Power for Google Cloud subscription to subscribe. If you’re interested in using FalconStor StorSafe VTL for Google Cloud without IBM Power for Google Cloud, contact sales about your use case.

Features include:

  • Compatibility: StorSafe VTL is compatible with any leading backup solution that uses physical or virtual tapes. Customers can quickly modernize existing IBM i, AIX, or Linux backup environments without significantly changing the backup solution.

  • Up to 95% Data Reduction: Using a Single Instance Repository (SIR) for block-level deduplication, the solution removes redundant copies of data, reducing capacity requirements and improving data portability in the cloud.

  • Ransomware Protection: WORM (Write Once, Read Many) capability allows non-rewritable and non-erasable data to be written to virtual tapes, providing extra data security by prohibiting accidental or unplanned data erasure. Virtual tapes are written once and cannot be altered even by an administrator.

  • Cost-Optimized Cloud Storage: Use Google Cloud Storage as the VTL deduplication data repository to reduce storage costs. Copy or move virtual tapes to Google Cloud Storage for cost-optimized, long-term storage.

  • Simplify Cloud Migration: Import physical tape libraries and migrate data to Google Cloud. This provides an inexpensive solution for preserving data and renewing backups in Google Cloud where no physical tape libraries or drives are required.

  • Google Cloud Billing Integration: All billing is handled by Google Cloud to simplify your cloud FinOps practice.

  • Multi-Region Replication: Deploy multiple StorSafe VTL appliances to replicate data between Google Cloud regions for disaster recovery.


9.1 - FalconStor StorSafe VTL for Google Cloud Support

Support information for FalconStor StorSafe VTL for Google Cloud

Converge Technology Solutions offers customers using FalconStor and IBM Power for Google Cloud unmatched technical and operational support, ensuring a seamless end-to-end experience. Our dedicated team of expert support engineers is available 24/7 through our global service desk for efficient end-to-end ticket management and resolution. We prioritize excellence in platform performance and operational reliability, delivering an exceptional customer experience every step of the way. Contact support using the Converge Support Portal.