Fixing ‘Invalid access config name’ error on CheckPoint

One of the many stupid things about the CheckPoint CloudGuard IaaS appliances in GCP is that CheckPoint never took into account scenarios where multiple clusters exist within the same project and/or the same network. This results in naming conflicts for the static routes and access configs, and the default behavior is for different clusters to “steal” routes and IP addresses from each other.

To fix this, the first step is to give each cluster a unique name. This can be done fairly easily by setting CHKP_TAG in the Python script $FWDIR/scripts/gcp_had.py:

CHKP_TAG = 'cluster-1'

This variable influences the route and access config names. But that alone won’t be enough, because their deployment script hard-codes the access config name, so failover will still fail. You’ll see this in $FWDIR/log/gcp_had.elg during a failover event:

2024-03-28 23:09:44,259-GCP-CP-HA-ERROR- Operation deleteAccessConfig for https://www.googleapis.com/compute/v1/projects/project-1234/zones/us-west2-b/instances/checkpoint-member-b error OrderedDict([('errors', [OrderedDict([('code', 'INVALID_USAGE'), ('message', 'Invalid access config name `checkpoint-access-config` as the access config name in instance is `x-chkp-access-config`.')])])])

To fix this, the existing access configs must be manually deleted on both members:

gcloud compute instances delete-access-config checkpoint-member-a --zone=us-west2-a --access-config-name="x-chkp-access-config"

gcloud compute instances delete-access-config checkpoint-member-b --zone=us-west2-b --access-config-name="x-chkp-access-config"

Then perform a rolling reboot of both members, and failover should now work.
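If you have more members (or multiple clusters) to clean up, those gcloud commands can be generated with a short Python helper. This is just a sketch; the member names, zones, and access config name below are the same example values used above:

```python
# Sketch: build the gcloud delete-access-config command for each member.
# Member names/zones and the access config name are example values.
MEMBERS = {
    "checkpoint-member-a": "us-west2-a",
    "checkpoint-member-b": "us-west2-b",
}
ACCESS_CONFIG = "x-chkp-access-config"  # hard-coded name seen in the error log


def delete_commands(members, access_config):
    """Return one gcloud command string per cluster member."""
    return [
        f"gcloud compute instances delete-access-config {name} "
        f'--zone={zone} --access-config-name="{access_config}"'
        for name, zone in members.items()
    ]


for cmd in delete_commands(MEMBERS, ACCESS_CONFIG):
    print(cmd)
```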

Getting an Access Token from a Service Account Key

This one took me a while to figure out, but if you want to get a Google access token from a Service Account key (JSON file), do this:

 
import json
import google.auth.transport.requests
import google.oauth2.service_account
import requests


KEY_FILE = "../private/my-project-key.json"
SCOPES = ['https://www.googleapis.com/auth/cloud-platform']


# Parse the key file to get its project ID
try:
    with open(KEY_FILE, 'r') as f:
        key_data = json.load(f)
    project_id = key_data.get('project_id')
except Exception as e:
    quit(e)

# Load the key as credentials, then refresh to generate an access token
try:
    credentials = google.oauth2.service_account.Credentials.from_service_account_file(KEY_FILE, scopes=SCOPES)
    credentials.refresh(google.auth.transport.requests.Request())
    access_token = credentials.token
except Exception as e:
    quit(e)

This token can then be used for REST API calls by inserting it into the Authorization header like this:

instances = []

try:
    url = f"https://compute.googleapis.com/compute/v1/projects/{project_id}/aggregated/instances"
    headers = {'Authorization': f"Bearer {access_token}"}
    response = requests.get(url, headers=headers)
    items = response.json().get('items', {})
    for k, v in items.items():
        instances.extend(v.get('instances', []))
except Exception as e:
    quit(e)

print([instance.get('name') for instance in instances])

Testing an Outbound SMTP relay using Telnet

I think this may go down as the most valuable skill I’ve learned in my entire career.

$ telnet smtp-relay.myisp.com 25
Trying 10.10.32.1...
Connected to smtp-relay.myisp.com.
Escape character is '^]'.
220 myisp.net ESMTP MAIL Service ready - Wed, 18 Oct 2023 16:44:08 GMT
helo localhost
250 myisp.net Hello [10.98.76.54], pleased to meet you
mail from: me@mydomain.com
250 2.1.0 me@mydomain.com... Sender ok
rcpt to: me@mydomain.com
250 2.1.5 me@mydomain.com... Recipient ok
data
354 Enter mail, end with "." on a line by itself
From: me@mydomain.com
To: me@mydomain.com
Subject: Test

Hello there
.
250 2.0.0 39IGi8N5006729 Message accepted for delivery
QUIT
221 2.0.0 myisp.net closing connection
Connection closed by foreign host.
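The same hand-typed conversation can be scripted with Python’s smtplib, which drives the HELO/MAIL/RCPT/DATA exchange for you. A minimal sketch; the relay hostname and addresses are the same placeholders as above:

```python
import smtplib
from email.message import EmailMessage


def build_test_message(sender, recipient):
    """Compose the same minimal test message sent by hand above."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "Test"
    msg.set_content("Hello there")
    return msg


def send_test(relay, sender, recipient, port=25):
    """Open an SMTP session to the relay and send the test message."""
    with smtplib.SMTP(relay, port) as smtp:
        smtp.send_message(build_test_message(sender, recipient))

# Example (would actually attempt delivery):
# send_test("smtp-relay.myisp.com", "me@mydomain.com", "me@mydomain.com")
```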

Migrating from oauth2client to google-auth for retrieving Google ADC access token in Python

I have some scripts designed to perform basic troubleshooting and admin tasks in GCP. The authentication is handled via Google Default Application credentials, where the user runs this command:

gcloud auth application-default login

This creates a credentials file, typically $HOME/.config/gcloud/application_default_credentials.json or similar. Alternately, they can authenticate as a service account by setting the GOOGLE_APPLICATION_CREDENTIALS environment variable. In my Python code, I read the credentials and retrieve an access token:

from oauth2client.client import GoogleCredentials

credentials = GoogleCredentials.get_application_default()
access_token = credentials.get_access_token().access_token

Which I can then use to make HTTPS calls as the user, using IAM permissions in GCP to control access to projects and resources.

Problem is, oauth2client has been deprecated since 2019. The recommended replacement is google-auth. I had a heck of a time finding a simple real-world example of getting the access token, but here it is. The trick is to use default() to get the credentials, then refresh() to generate a fresh access token:

from google.auth import default
from google.auth.transport.requests import Request

SCOPES = ['https://www.googleapis.com/auth/cloud-platform']

credentials, project_id = default(scopes=SCOPES, quota_project_id='xxxxxx')
credentials.refresh(Request())
access_token = credentials.token

An alternate solution is to use Credentials.from_authorized_user_file() to read the credentials. It is faster, but takes some extra work to determine the location of the JSON file. Assuming GOOGLE_APPLICATION_CREDENTIALS is set:

from os import environ
from google.oauth2.credentials import Credentials

_ = environ.get('GOOGLE_APPLICATION_CREDENTIALS')
credentials = Credentials.from_authorized_user_file(_, scopes=SCOPES)

BTW, to instead authenticate as a Service Account, just use these two lines:

from google.oauth2.service_account import Credentials

credentials = Credentials.from_service_account_file(_, scopes=SCOPES)

Using the GCP API to get BGP AS Path Information

One of the trickier parts about building complex hybrid cloud networks is that it’s difficult to troubleshoot scenarios where there are multiple paths.

GCP doesn’t offer a way to view this in the Web UI or CLI (gcloud), but it is accessible via the routers.getRouterStatus() method in the API. This essentially queries a specific Cloud Router for detailed BGP information. The JSON response will look like this; of most interest is the asPaths list.

{
  "kind": "compute#routerStatusResponse",
  "result": {
    "network": "https://www.googleapis.com/compute/v1/projects/xxx/global/networks/yyy",
    "bestRoutes": [
      {
        "kind": "compute#route",
        "creationTimestamp": "2023-06-14T13:17:31.690-07:00",
        "network": "https://www.googleapis.com/compute/v1/projects/xxx/global/networks/yyy",
        "destRange": "10.20.30.0/24",
        "priority": 100,
        "nextHopIp": "169.254.22.73",
        "nextHopVpnTunnel": "https://www.googleapis.com/compute/v1/projects/xxx/regions/us-east4/vpnTunnels/vpn-0",
        "routeType": "BGP",
        "asPaths": [
          {
            "pathSegmentType": "AS_SEQUENCE",
            "asLists": [
              4200000000
            ]
          }
        ]
      },
      {
        "kind": "compute#route",
        "creationTimestamp": "2023-06-14T13:17:31.690-07:00",
        "network": "https://www.googleapis.com/compute/v1/projects/xxx/global/networks/yyy",
        "destRange": "10.20.30.0/24",
        "priority": 100,
        "nextHopIp": "169.254.22.74",
        "nextHopVpnTunnel": "https://www.googleapis.com/compute/v1/projects/xxx/regions/us-east4/vpnTunnels/vpn-1",
        "routeType": "BGP",
        "asPaths": [
          {
            "pathSegmentType": "AS_SEQUENCE",
            "asLists": [
              4200000000
            ]
          }
        ]
      }
    ]
  }
}
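For scripting, the same data can be pulled with a couple of small functions; a sketch using the access-token approach from earlier, where the project, region, and router names are placeholders:

```python
import requests


def get_router_status(project, region, router, access_token):
    """Call routers.getRouterStatus for one Cloud Router."""
    url = (f"https://compute.googleapis.com/compute/v1/projects/{project}"
           f"/regions/{region}/routers/{router}/getRouterStatus")
    response = requests.get(url, headers={"Authorization": f"Bearer {access_token}"})
    response.raise_for_status()
    return response.json()


def extract_as_paths(status):
    """Map each best route's destRange to (nextHopIp, AS lists) tuples."""
    paths = {}
    for route in status.get("result", {}).get("bestRoutes", []):
        as_lists = [p.get("asLists", []) for p in route.get("asPaths", [])]
        paths.setdefault(route["destRange"], []).append(
            (route.get("nextHopIp"), as_lists))
    return paths

# Example (placeholder names):
# status = get_router_status("my-project", "us-east4", "my-router", access_token)
# print(extract_as_paths(status))
```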

Getting started with CheckPoint R81.10 Management API

Finally got some time to start exploring the CheckPoint management server’s web API. As with most vendors, the tricky part was understanding the required steps for access and making basic calls. Here’s a quick walk-through.

Getting Management API Access

By default, access is only permitted from the Management server itself. To change this, do the following:

1. In SmartConsole, navigate to Manage & Settings -> Blades -> Management API

2. Change the Accessibility setting to “All IP Addresses that can be used by GUI clients” or simply “All IP Addresses”.

3. Click OK. You’ll see a message about restarting the API.

4. Click the “Publish” button at the top.

5. SSH to the Management Server and enter expert mode. Then enter this command:

api restart

6. After the restart is complete, use the command api status to verify the Accessibility is no longer “Require Local”:

[Expert@chkp-mgmt-server:0]# api status

API Settings:
---------------------
Accessibility:                      Require all granted
Automatic Start:                    Enabled

Verifying API Permissions

While in SmartConsole, also verify that your account’s permission profile allows API login: examine the Permission profile and look under the “Management” tab. This should be enabled by default.

Generating a Session Token

Now we’re ready to hit the API. The first step generally is to do a POST to /web_api/login to get a SID (session token). There are two required parameters: ‘user’ and ‘password’. Here’s a Postman example. Note the parameters are raw JSON in the body (not the headers):

Making an actual API Call

With the SID obtained, we can copy/paste it and start sending some actual requests. There are a few things to keep in mind:

  • The requests are always POST, even if retrieving data
  • Two headers must be included: X-chkp-sid (which is the sid generated above) and Content-Type (which should be ‘application/json’)
  • All other parameters are set in the body. If no parameters are required, the body must be an empty object ({})

Here’s another Postman example, getting a list of all Star VPN Communities:

Retrieving details on specific objects

To get full details for a specific object, we have to specify the name or uid in the POST body. For example, to get more information about a specific VPN community, make a request to /web_api/show-vpn-community-star with this:

{
    "uid": "fe5a4339-ff15-4d91-bfa2-xxxxxxxxxx"
}

You’ll get back an object (i.e. a Python dictionary).
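Outside Postman, the whole flow is easy to reproduce with Python and requests. A sketch, assuming a management server at a placeholder hostname with a self-signed certificate (hence verify=False) and hypothetical credentials:

```python
import requests

MGMT_SERVER = "https://chkp-mgmt-server"  # placeholder hostname


def build_headers(sid):
    """The two required headers for every management API call."""
    return {"X-chkp-sid": sid, "Content-Type": "application/json"}


def login(user, password):
    # Credentials go in the raw JSON body, not the headers.
    r = requests.post(f"{MGMT_SERVER}/web_api/login",
                      json={"user": user, "password": password}, verify=False)
    r.raise_for_status()
    return r.json()["sid"]


def api_call(sid, command, payload=None):
    # Always a POST, even when retrieving data; an empty body must be {}.
    r = requests.post(f"{MGMT_SERVER}/web_api/{command}",
                      headers=build_headers(sid),
                      json=payload if payload is not None else {},
                      verify=False)
    r.raise_for_status()
    return r.json()

# Example (hypothetical credentials):
# sid = login("admin", "password")
# print(api_call(sid, "show-vpn-communities-star"))
# print(api_call(sid, "show-vpn-community-star",
#                {"uid": "fe5a4339-ff15-4d91-bfa2-xxxxxxxxxx"}))
```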

Installing AWS CLI Tools v2 on FreeBSD 12.4

I upgraded my FreeBSD VM from 11 to 12 last weekend. Installing Google Cloud SDK was no problem; just use the FreeBSD package:

pkg install google-cloud-sdk

But for AWS CLI tools, there’s only a package for AWS CLI Tools version 1. CLI Tools version 2 has been out for 3 years now, so we really don’t want to still be using v1.

Usually there’s a simple work-around: since the AWS CLI is mostly Python scripts, you can install it via PIP. First, verify the version of Python installed, then install PIP3 for that version:

python -V
Python 3.9.16

pkg install py39-pip

Then install AWS CLI Tools v2 via PIP:

pip install awscliv2

But when we go to complete the install, we get this error:

awscliv2 --install
09:27:26 - awscliv2 - ERROR - FreeBSD amd64 is not supported, use docker version

This is because AWS CLI v2 relies on a compiled binary, and no FreeBSD build is available. We can work around this by enabling Linux emulation.


Activating Linux Emulation in FreeBSD

First, add the following line to /etc/rc.conf:

linux_enable="YES"

Then either run this command, or simply reboot:

service linux start

Also install the CentOS 7 base from packages:

pkg install linux_base-c7

Completing install of AWS CLI Tools v2

Add the following line near the bottom of /usr/local/lib/python3.9/site-packages/awscliv2/installers.py to allow it to support FreeBSD:

    if os_platform == "FreeBSD" and arch == "amd64":
        return install_linux_x86_64()

Now we can complete the install successfully:

arnie@freebsd:~ % awscliv2 --install
10:07:13 - awscliv2 - INFO - Installing AWS CLI v2 for Linux
10:07:13 - awscliv2 - INFO - Downloading package from https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip to /tmp/tmpan_dsh2c.zip
10:07:17 - awscliv2 - INFO - Extracting /tmp/tmpan_dsh2c.zip to to /tmp/tmpnwvl45tn
10:07:26 - awscliv2 - INFO - Installing /tmp/tmpnwvl45tn/aws/install to /home/arnie/.awscliv2
10:08:37 - awscliv2 - INFO - Now awsv2 will use this installed version
10:08:37 - awscliv2 - INFO - Running now to check installation: awsv2 --version

Verify the binary and libraries are installed correctly:

~/.awscliv2/v2/current/dist/aws --version
aws-cli/2.11.10 Python/3.11.2 Linux/3.2.0 exe/x86_64.centos.7 prompt/off

You’ll probably want to include this directory in your path. Since I use TCSH, I do this by adding this line to ~/.cshrc:

set path = (/sbin /bin /usr/sbin /usr/bin /usr/local/sbin /usr/local/bin $HOME/bin $HOME/.awscliv2/v2/2.11.10/dist)

You’re now ready to configure AWS CLI Tools v2. Run this command:

aws configure

Or, just manually set up the files ~/.aws/config and ~/.aws/credentials. Then try out a command:

aws s3 ls

Use the AWS_PROFILE and AWS_REGION environment variables to override the defaults configured in ~/.aws/config.

Google Cloud Internal HTTP(S) Load Balancers now have global access support

Previously, the Envoy-based Internal HTTP(S) load balancers could only be accessed from within the same region. For orgs that leverage multiple regions and carry cross-region traffic, this limitation was a real pain point, and one that AWS ALBs don’t have. So, I’m glad to see it’s now offered:

Oddly, the radio button only shows up during the ILB creation. To modify an existing one, use this gcloud command:

gcloud compute forwarding-rules update NAME --allow-global-access

Or, in Terraform:

resource "google_compute_forwarding_rule" "default" {
  allow_global_access   = true
}

It’s also important to be aware that global access must be enabled on the HTTP(S) ILB if it is accessed from another load balancer via PSC. If not, you’ll get this error message:

 Error 400: Invalid value for field 'resource.backends[0]': '{  "resourceGroup": "projects/myproject/regions/us-west1/networkEndpointGroups/psc-backend", ...'. Global L7 Private Service Connect consumers require the Private Service Connect producer load balancer to have AllowGlobalAccess enabled., invalid