Getting started with CheckPoint R81.10 Management API

Finally got some time to start exploring the CheckPoint management server’s API via the web. As with most vendors, the tricky part was understanding the required steps for access and making basic calls. Here’s a quick walk-through.

Getting Management API Access

By default, access is only permitted from the Management server itself. To change this, do the following:

1. In SmartConsole, navigate to Manage & Settings -> Blades -> Management API.

2. Change the setting to “All IP Addresses that can be used by GUI clients” or simply “All IP Addresses”.

3. Click OK. You’ll see a message about restarting the API.

4. Click the “Publish” button at the top.

5. SSH to the Management Server and enter expert mode. Then run this command:

api restart

6. After the restart is complete, run api status and verify the accessibility is no longer “Require Local”:

[Expert@chkp-mgmt-server:0]# api status

API Settings:
---------------------
Accessibility:                      Require all granted
Automatic Start:                    Enabled

Verifying API Permissions

While in SmartConsole, also verify that your account’s permission profile allows API login: open the Permission profile and look under the “Management” tab. This should be enabled by default.

Generating a Session Token

Now we’re ready to hit the API. The first step is generally to do a POST to /web_api/login to get a SID (session token). There are two required parameters: ‘user’ and ‘password’. Here’s a Postman example. Note the parameters are raw JSON in the body (not the headers):
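
If you prefer code to Postman, the same login call looks roughly like this with Python’s requests library (the hostname and credentials are placeholders):

import requests

MGMT_SERVER = "https://mgmt.example.com"   # placeholder management server

# Credentials go in the JSON body, not in the headers
payload = {"user": "apiuser", "password": "mypassword"}

# verify=False only because management servers commonly use a self-signed cert
resp = requests.post(MGMT_SERVER + "/web_api/login", json=payload, verify=False)
resp.raise_for_status()

sid = resp.json()["sid"]
print("Session ID:", sid)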

Making an actual API Call

With the SID obtained, we can copy/paste it and start sending some actual requests. There are a few things to keep in mind:

  • The requests are always POST, even if retrieving data
  • Two headers must be included: X-chkp-sid (the SID generated above) and Content-Type (which should be ‘application/json’)
  • All other parameters are set in the body. If no parameters are required, the body must be an empty JSON object ({})

Here’s another Postman example getting a list of all Star VPN Communities:
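
The same request as a Python requests sketch (hostname is a placeholder; sid is whatever the login call returned):

import requests

MGMT_SERVER = "https://mgmt.example.com"        # placeholder management server
sid = "<sid returned by /web_api/login>"        # paste the session token here

headers = {
    "X-chkp-sid": sid,
    "Content-Type": "application/json",
}

# No parameters are needed, so the body is an empty JSON object
resp = requests.post(MGMT_SERVER + "/web_api/show-vpn-communities-star",
                     headers=headers, json={}, verify=False)
print(resp.json())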

Retrieving details on specific objects

To get full details for a specific object, we have to specify the name or uuid in the POST body. For example, to get more information about a specific VPN community, make a request to /web_api/show-vpn-community-star with this:

{
    "uid": "fe5a4339-ff15-4d91-bfa2-xxxxxxxxxx"
}

You’ll get back a JSON object (which maps nicely to a Python dictionary).


Installing AWS CLI Tools v2 on FreeBSD 12.4

I upgraded my FreeBSD VM from 11 to 12 last weekend. Installing Google Cloud SDK was no problem; just use the FreeBSD package:

pkg install google-cloud-sdk

But for AWS CLI tools, there’s only a package for AWS CLI Tools version 1. CLI Tools version 2 has been out for 3 years now, so we really don’t want to still be using v1.

Usually there’s a simple work-around: since the AWS CLI is mostly Python scripts, you can install it via PIP. First, verify the version of Python installed, then install PIP3 for that version:

python -V
Python 3.9.16

pkg install py39-pip

Then install AWS CLI Tools v2 via PIP:

pip install awscliv2

But when we go to complete the install, we get this error:

awscliv2 --install
09:27:26 - awscliv2 - ERROR - FreeBSD amd64 is not supported, use docker version

This is because AWS CLI v2 relies on a compiled binary, and there’s no FreeBSD build, only a Linux one. We can work around this by enabling Linux emulation.


Activating Linux Emulation in FreeBSD

First, add the following line to /etc/rc.conf:

linux_enable="YES"

Then either run this command, or simply reboot:

service linux start

Also install the CentOS 7 base from packages:

pkg install linux_base-c7

Completing install of AWS CLI Tools v2

Add the following lines near the bottom of /usr/local/lib/python3.9/site-packages/awscliv2/installers.py to allow it to support FreeBSD:

    if os_platform == "FreeBSD" and arch == "amd64":
        return install_linux_x86_64()

Now we can complete the install successfully:

arnie@freebsd:~ % awscliv2 --install
10:07:13 - awscliv2 - INFO - Installing AWS CLI v2 for Linux
10:07:13 - awscliv2 - INFO - Downloading package from https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip to /tmp/tmpan_dsh2c.zip
10:07:17 - awscliv2 - INFO - Extracting /tmp/tmpan_dsh2c.zip to to /tmp/tmpnwvl45tn
10:07:26 - awscliv2 - INFO - Installing /tmp/tmpnwvl45tn/aws/install to /home/arnie/.awscliv2
10:08:37 - awscliv2 - INFO - Now awsv2 will use this installed version
10:08:37 - awscliv2 - INFO - Running now to check installation: awsv2 --version

Verify the binary and libraries are installed correctly:

~/.awscliv2/v2/current/dist/aws --version
aws-cli/2.11.10 Python/3.11.2 Linux/3.2.0 exe/x86_64.centos.7 prompt/off

You’ll probably want to include this directory in your path. Since I use TCSH, I do this by adding this line to ~/.cshrc:

set path = (/sbin /bin /usr/sbin /usr/bin /usr/local/sbin /usr/local/bin $HOME/bin $HOME/.awscliv2/v2/2.11.10/dist)

You’re now ready to configure aws cli tools v2. Run this command:

aws configure

Or, just manually set up the files ~/.aws/config and ~/.aws/credentials. Then try out a command:

aws s3 ls

Use the AWS_PROFILE and AWS_REGION environment variables to override the defaults configured in ~/.aws/config.

Authenticating to Google Cloud Platform via OAuth2 with Python

For most of my troubleshooting tools, I want to avoid the security concerns that come with managing service accounts. Using my own account also lets me access multiple projects. To do the authentication in Python, I’d originally installed google-api-python-client and then authenticated using credentials=None:

from googleapiclient.discovery import build

try:
    resource_object = build('compute', 'v1', credentials=None)
except Exception as e:
    quit(e)

This call was a bit slow (2-3 seconds) and I was wondering if there was a faster way. The answer is ‘yes’ – just use OAuth2 tokens instead. Here’s how.

If not done already, generate a login session via this CLI command:

gcloud auth application-default login

You can then view its access token with this CLI command:

gcloud auth application-default print-access-token

You should see a string back that’s around 200 characters long. Now we’re ready to try this out with Python. First, install the oauth2client package:

pip3 install oauth2client

Now the actual Python code to get that same access token:

from oauth2client.client import GoogleCredentials

try:
    creds = GoogleCredentials.get_application_default()
except Exception as e:
    quit(e)

print("Access Token:", creds.get_access_token().access_token)

This took around 150-300 ms to execute, which is quite a bit faster and perfectly reasonable.

If using raw HTTP calls via requests, aiohttp, or http.client, set a header with ‘Authorization’ as the key and ‘Bearer <ACCESS_TOKEN>’ as the value.
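
For example, a minimal requests sketch (the Compute Engine URL and project ID are just illustrative placeholders):

import requests
from oauth2client.client import GoogleCredentials

# Grab an access token the same way as above
creds = GoogleCredentials.get_application_default()
token = creds.get_access_token().access_token
headers = {"Authorization": "Bearer " + token}

# Illustrative call: list Compute Engine zones in a placeholder project
url = "https://compute.googleapis.com/compute/v1/projects/myproject-1234/zones"
print(requests.get(url, headers=headers).status_code)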

Reading TOML in Python

Last year I started hearing more about TOML, a minimal configuration language (in the same space as YAML, JSON, and XML) that reminds me of configparser. The nice thing about TOML is that the syntax and formatting very closely resemble Python and Terraform, so it’s a very natural thing to use and is worth learning.

To install the package via PIP:

sudo pip3 install tomli

On FreeBSD, use a package:

pkg install py39-tomli

Create some basic TOML in a “test.toml” file:

[section_a]
key1 = "value_a1"
key2 = "value_a2"

[section_b]
name = "Frank"
age = 51
alive = true

[section_c]
pi = 3.14

Now some Python code to read it:

import tomli
from pprint import pprint

with open("test.toml", mode="rb") as fp:
    pprint(tomli.load(fp))

When run, it produces the following:

{'section_a': {'key1': 'value_a1', 'key2': 'value_a2'},
 'section_b': {'age': 51, 'alive': True, 'name': 'Frank'},
 'section_c': {'pi': 3.14}}
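
Since the parsed result is just a nested dictionary, individual values can be pulled out the usual way:

import tomli

with open("test.toml", mode="rb") as fp:
    config = tomli.load(fp)

print(config["section_b"]["name"])   # Frank
print(config["section_c"]["pi"])     # 3.14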

Sorting IP addresses in Python

Trying to sort IP addresses by their regular string values doesn’t work very well:

sorted(['192.168.1.1','192.168.2.1','192.168.11.1','192.168.12.1'])
['192.168.1.1', '192.168.11.1', '192.168.12.1', '192.168.2.1']

The good news is the solution isn’t difficult. Just use ip_address() to get the IP in integer form with a lambda:

import ipaddress

ips = ['192.168.1.1','192.168.2.1','192.168.11.1','192.168.12.1']

sorted(ips, key=lambda i: int(ipaddress.ip_address(i)))

Results in the following:

['192.168.1.1', '192.168.2.1', '192.168.11.1', '192.168.12.1']

In a more complex example where the data is in a list of dictionaries:

import ipaddress

ips = [
  {'address': "192.168.0.1"},
  {'address': "100.64.0.1"},
  {'address': "10.0.0.1"},
  {'address': "198.18.0.1"},
  {'address': "172.16.0.1"},
]

addresses = sorted(ips, key=lambda x: int(ipaddress.ip_address(x['address'])))

print(addresses)

Results in the following:

[{'address': '10.0.0.1'}, {'address': '100.64.0.1'}, {'address': '172.16.0.1'}, {'address': '192.168.0.1'}, {'address': '198.18.0.1'}]

To get the IPs with the highest first, just add reverse=True to the sorted() call.
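
For example, with the first list above:

import ipaddress

ips = ['192.168.1.1', '192.168.2.1', '192.168.11.1', '192.168.12.1']

sorted(ips, key=lambda i: int(ipaddress.ip_address(i)), reverse=True)
# ['192.168.12.1', '192.168.11.1', '192.168.2.1', '192.168.1.1']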

A weird, ugly error message when running google_ha_test.py

[Expert@cp-member-a:0]# $FWDIR/scripts/google_ha_test.py
GCP HA TESTER: started
GCP HA TESTER: checking access scopes...
GCP HA TESTER: ERROR 

Expecting value: line 1 column 1 (char 0)

Got this message when trying to test a CheckPoint R81.10 cluster build in a new environment. Obviously, this error message is not at all helpful in determining what the problem is. So I wrote a little debug script to try and isolate the issue:

import traceback
import gcp as _gcp  # CheckPoint's bundled GCP helper module ($FWDIR/scripts/gcp.py)

global api
api = _gcp.GCP('IAM', max_time=20)
metadata = api.metadata()[0]

# Pull the project, zone, and instance name from the metadata server
project = metadata['project']['projectId']
zone = metadata['instance']['zone'].split('/')[-1]
name = metadata['instance']['name']

print("Got metadata: project = {}, zone = {}, name = {}\n".format(project, zone, name))
path = "/projects/{}/zones/{}/instances/{}".format(project, zone, name)

# Make a simple GET against the instance resource and dump any exception
try:
    head, res = api.rest("GET", path, query=None, body=None, aggregate=False)
except Exception:
    print(traceback.format_exc())

Running the script, I now see an exception when trying to make the initial API call:

[Expert@cp-cluster-member-a:0]# cd $FWDIR/scripts
[Expert@cp-cluster-member-a:0]# python3 ./debug.py

Got metadata: project = myproject, zone = us-central1-b, name = cp-member-a

Traceback (most recent call last):
  File "debug.py", line 18, in <module>
    head, res = api.rest(method,path,query=None,body=None,aggregate=False)
  File "/opt/CPsuite-R81.10/fw1/scripts/gcp.py", line 327, in rest
    max_time=self.max_time, proxy=self.proxy)
  File "/opt/CPsuite-R81.10/fw1/scripts/gcp.py", line 139, in http
    headers['_code']), headers, repr(response))
gcp.HTTPException: Unexpected HTTP code: 403

This at least indicates the connection to the API is OK and it’s some type of permissions issue with the account.
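
To help narrow down whether it’s a missing access scope on the instance versus a missing IAM role on the service account, one quick check (just a sketch using the standard GCE metadata endpoint, not part of the CheckPoint tooling) is to ask the metadata server which scopes the instance actually has:

import urllib.request

# Query the GCE metadata server for the scopes granted to this instance
url = ("http://metadata.google.internal/computeMetadata/v1/"
       "instance/service-accounts/default/scopes")
req = urllib.request.Request(url, headers={"Metadata-Flavor": "Google"})
print(urllib.request.urlopen(req).read().decode())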

The CheckPoints have always been really tough to troubleshoot in this respect, so to keep it simple, I deploy them with the default service account for the project. It’s not explicitly called out in the documentation exactly which permissions that account needs.

I was able to re-enable Editor permissions for the default service account with this Terraform code:

# Set Project ID via input variable
variable "project_id" {
  description = "GCP Project ID"
  type = string
}
# Get the default service account info for this project
data "google_compute_default_service_account" "default" {
  project = var.project_id
}
# Enable editor role for this service account
resource "google_project_iam_member" "default_service_account_editor" {
  project = var.project_id
  member  = "serviceAccount:${data.google_compute_default_service_account.default.email}"
  role    = "roles/editor"
}

Making Async Calls to Google Cloud Storage

I have a script doing real-time log analysis, where about 25 log files are stored in a Google Cloud Storage bucket. The files are always small (1-5 MB each) but the script was taking over 10 seconds to run, resulting in slow page load times and poor user experience. Performance analysis showed that most of the time was spent on the storage calls, with high overhead of requesting individual files.

I started thinking the best way to improve performance was to make the storage calls asynchronously so the files download in parallel. This requires a library capable of making such calls; after lots of Googling and trial and error I found a Stack Overflow post that mentioned gcloud AIO Storage. This worked very well, and after implementation I’m seeing a 125% speed improvement!

Here’s a rundown of the steps I took to get async working with GCS.

1) Install gcloud AIO Storage:

pip install gcloud-aio-storage

2) In the Python code, start with some imports

import asyncio
from gcloud.aio.auth import Token
from gcloud.aio.storage import Storage

3) Create a function to read multiple files from the same bucket:

async def IngestLogs(bucket_name, file_names, key_file=None):

    SCOPES = ["https://www.googleapis.com/auth/cloud-platform.read-only"]
    token = Token(service_file=key_file, scopes=SCOPES)

    # Download all the files concurrently within a single Storage session
    async with Storage(token=token) as client:
        tasks = (client.download(bucket_name, _) for _ in file_names)
        blobs = await asyncio.gather(*tasks)
    await token.close()

    return blobs

It’s important to note that ‘blobs’ will be a list, with each element representing a binary version of the file.

4) Create some code to call the async function. The decode() method will convert each blob to a string.


def main():

    bucket_name = "my-gcs-bucket"
    file_names = {
       'file1': "path1/file1.abc",
       'file2': "path2/file2.def",
       'file3': "path3/file3.ghi",
    }
    key = "myproject-123456-mykey.json"

    blobs = asyncio.run(IngestLogs(bucket_name, file_names.values(), key_file=key))

    for blob in blobs:
        # Print the first line from each blob
        print(blob.decode('utf-8')[0:75])

if __name__ == "__main__":
    main()

I track the load times via NewRelic synthetics, and it showed a 300% performance improvement!

Using GCP Python SDK for Network Tasks

Last week, I finally got around to hitting the GCP API directly using Python. It’s pretty easy to do in hindsight. The steps are below.


If not done already, install PIP. On Debian 10, the command is this:

sudo apt install python3-pip

Then of course install the Python packages for GCP:

sudo pip3 install google-api-python-client google-cloud-storage

Now you’re ready to write some Python code. Start with a couple imports:

#!/usr/bin/env python3 

from googleapiclient import discovery
from google.oauth2 import service_account

By default, the default compute service account for the VM or App Engine will be used for authentication. Alternately, a service account can be specified with the key’s JSON file:

KEY_FILE = '../mykey.json'
creds = service_account.Credentials.from_service_account_file(KEY_FILE)

Connecting to the Compute API will look like this. If using the default service account, the ‘credentials’ argument is not required.

resource_object = discovery.build('compute', 'v1', credentials=creds)

All API calls require that the project ID (not the name) be provided as a parameter. I’ll set it like this:

PROJECT_ID = "myproject-1234"

With the connection to the API established, you can now run some commands. The resource object exposes several collections (firewalls(), subnetworks(), and so on), and each typically has a list() method to list the items in the project. The execute() at the end is required to actually run the call.

_ = resource_object.firewalls().list(project=PROJECT_ID).execute()

It’s important to note that list().execute() returns a dictionary. The actual list of items can be found under the ‘items’ key. I’ll use the get() method to retrieve the values for ‘items’, falling back to an empty list if the key doesn’t exist. Here’s an example:

firewall_rules = _.get('items', [])
print(len(firewall_rules), "firewall rules in project", PROJECT_ID)
for firewall_rule in firewall_rules:
    print(" -", firewall_rule['name'])

The API reference guide has a complete list of everything that’s available. Here are some examples:

  • firewalls() - List firewall rules
  • globalAddresses() - List all global addresses
  • healthChecks() - List load balancer health checks
  • subnetworks() - List subnets within a given region
  • vpnTunnels() - List configured VPN tunnels

Some calls will require the region name as a parameter. To get a list of all regions, this can be done:

_ = resource_object.regions().list(project=PROJECT_ID).execute()
regions = [region['name'] for region in _.get('items', [])]

Then iterate through each region. For example to list all subnets:

for region in regions:
    _ = resource_object.subnetworks().list(project=PROJECT_ID,region=region).execute()
    print("Reading subnets for region", region ,"...")
    subnets = _.get('items', [])
    for subnet in subnets:
        print(" -", subnet['name'], subnet['ipCidrRange'])

Getting web server variables and query parameters in different Python Frameworks

As I explore different ways of doing web programming in Python via different frameworks, I keep finding the need to examine HTTP server variables, specifically the server hostname, path, and query string. The method for doing this varies quite a bit by framework.

Given the following URL: http://www.mydomain.com:8000/derp/test.py?name=Harry&occupation=Hit%20Man

I want to create the following variables with the following values:

  • server_host is ‘www.mydomain.com’
  • server_port is 8000
  • path is ‘/derp/test.py’
  • query_params is this dictionary: {‘name’: ‘Harry’, ‘occupation’: ‘Hit Man’}

Old School CGI

cgi.FieldStorage() is the easy way to do this, but it returns a FieldStorage object that must be converted to a regular dictionary.

#!/usr/bin/env python3

if __name__ == "__main__":

    import os, cgi

    server_host = os.environ.get('HTTP_HOST', 'localhost')
    server_port = os.environ.get('SERVER_PORT', 80)
    path = os.environ.get('SCRIPT_URL', '/')
    query_params = {}
    _ = cgi.FieldStorage()
    for key in _:
        query_params[key] = str(_[key].value)

Note this will convert all values to strings. By default, cgi.FieldStorage() creates numeric values as int or float.

WSGI

Similar to CGI, but the environment variables are simply passed in a dictionary as the first parameter. There is no need to load the os module.

def application(environ, start_response):

    from urllib import parse

    server_host = environ.get('HTTP_HOST', 'localhost')
    server_port = environ.get('SERVER_PORT', 80)
    path = environ.get('SCRIPT_URL', '/')
    query_params = {}
    if '?' in environ.get('REQUEST_URI', '/'):
        query_params = dict(parse.parse_qsl(parse.urlsplit(environ['REQUEST_URI']).query))

Since the CGI Headers don’t exist, urllib.parse can be used to analyze the REQUEST_URI environment variable in order to form the dictionary.

Flask

Flask makes this very easy. The only real trick comes with path; the leading ‘/’ gets removed, so it must be re-added.

from flask import Flask, request

app = Flask(__name__)

# Route all possible paths here
@app.route("/", defaults={"path": ""})
@app.route('/<string:path>')
@app.route("/<path:path>")

def index(path):
      
    [server_host, server_port] = request.host.split(':')
    path =  "/" + path
    query_params = request.args
 

FastAPI

This one’s slightly different because the main variable to examine is actually a QueryParams object, which is a form of MultiDict.

from fastapi import FastAPI, Request

app = FastAPI()

# Route all possible paths here
@app.get("/")
@app.get("/{path:path}")
def root(path, req: Request):

    [server_host, server_port] = req.headers['host'].split(':')
    path = "/" + path
    query_params = dict(req.query_params)

AWS Lambda

Lambda presents a dictionary called ‘event’ to the handler and it’s simply a matter of grabbing the right keys:

def lambda_handler(event, context):

    server_host = event['headers']['host']
    server_port = event['headers']['X-Forwarded-Port']
    path = event['path']
    query_params = event['queryStringParameters']

If multiValueHeaders are enabled, some of the variables come in as lists, which in turn may have a list as values, even if there’s only one item.

    server_host = event['multiValueHeaders']['host'][0]
    query_params = {}
    for _ in event["multiValueQueryStringParameters"].items():
        query_params[_[0]] = _[1][0]

Getting started with Flask and deploying Python apps to Google App Engine

Installing Flask on Linux or Mac

On Debian 10 or Ubuntu 20:

sudo pip3 install flask flask-cors

On Mac or FreeBSD:

sudo pip install flask flask-cors

Creating a basic Flask app:

#!/usr/bin/env python3

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/", defaults = {'path': ""})
@app.route("/<string:path>")
@app.route("/<path:path>")

def index(path):
    req_info = {
        'host': request.host.split(':')[0],
        'path': "/" + path,
        'query_string': request.args,
        'remote_addr': request.environ.get('HTTP_X_REAL_IP', request.remote_addr),
        'user_agent': request.user_agent.string
    }
    return jsonify(req_info)

if __name__ == '__main__':
    app.run()

Run the app

chmod u+x main.py
./main.py
Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)

Do a test curl against it

$ curl -v "http://localhost:5000/oh/snap?x=1&x=2"

< HTTP/1.0 200 OK
< Content-Type: application/json
< Content-Length: 65
< Access-Control-Allow-Origin: *
< Server: Werkzeug/1.0.1 Python/3.7.8
< Date: Wed, 21 Apr 2021 17:07:58 GMT
<
{"host":"localhost","path":"/oh/snap","query_string":{"x":"1"},"remote_addr":"127.0.0.1","user_agent":"curl/7.72.0"}

Deploying to Google Cloud App Engine

Create a requirements.txt file:

echo "flask" > requirements.txt

Create an app.yaml file:

printf "runtime: python38\nenv: standard\n" > app.yaml 

Now deploy the app to Google using the gcloud command:

gcloud app deploy