Here’s an outline of the process:
- Launch a new R81.10 VM and create /var/log/mdss.json with the hostname and new IP address
- On the old R80.40 VM, perform an export (this results in services being stopped for ~15 minutes)
- On the new R81.10 VM, perform an import (this takes about 30 minutes)
- If using BYOL, re-issue the license with the new IP address
Performing Export on old R80.40 Server
On the old R80.40 server, in the Gaia web UI, navigate to Maintenance -> System Backups. If not done already, run a backup. This gives a rough idea of how long the export job will take and the approximate file size, including logs.
So for me, the export size can be assumed to be just under 1.2 GB. Then go to the CLI and enter expert mode. First, run migrate_server verify:
```
expert
cd $FWDIR/scripts
./migrate_server verify -v R81.10

The verify operation finished successfully.
```
Now actually do the export. Mine took about 15 minutes and resulted in a 1.1 GB file when including logs.
```
./migrate_server export -v R81.10 -l /var/log/export.tgz

The export operation will eventually stop all Check Point services (cpstop; cpwd_admin kill).
Do you want to continue (yes/no) [n]? yes
Exporting the Management Database
Operation started at Thu Jan 5 16:20:33 UTC 2023
[==================================================] 100% Done
The export operation completed successfully.
Do you wish to start Check Point services (yes/no) [y]? y
Starting Check Point services ...
The export operation finished successfully.
Exported data to: /var/log/export.tgz.
```
Then copy the export file somewhere offsite using SCP or SFTP.
```
ls -la /var/log/export.tgz
-rw-rw---- 1 admin root 1125166179 Jan 5 17:36 /var/log/export.tgz

scp /var/log/export.tgz email@example.com:
```
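Since this archive is the only copy of the management database during the cutover, it's worth verifying the transfer with a checksum on both ends. A generic sketch using plain coreutils (the helper name is mine, not a Check Point tool):

```shell
# Hypothetical helper: write <file>.sha256 next to the file, using a relative
# name inside the checksum file so it verifies from any destination directory.
checksum_and_note() {
    ( cd "$(dirname "$1")" && sha256sum "$(basename "$1")" > "$(basename "$1").sha256" )
}

# On the R80.40 server:
#   checksum_and_note /var/log/export.tgz
#   scp /var/log/export.tgz /var/log/export.tgz.sha256 email@example.com:
# On the destination, confirm the archive arrived intact:
#   sha256sum -c export.tgz.sha256
```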
Setting up the new R81.10 Server
After launching the VM, SSH in and set an admin user password and expert mode password. Then save config:
```
set user admin password
set expert-password
save config
```
Log in to the Web GUI and start the setup wizard. This is pretty much just clicking through a bunch of “Next” buttons. It is recommended to enable NTP, though, and to uncheck “Gateway” if this is a management-only server.
When the setup wizard has concluded, download and install SmartConsole, then the latest hotfix.
Once rebooted, log in via the CLI, go to expert mode, and create a /var/log/mdss.json file containing the name of the Management Server object (as it appears in SmartConsole) and the new server’s internal IP address.
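A minimal mdss.json, using a placeholder object name and IP (the keys here follow the R81.10 upgrade documentation; double-check them against the guide for your version):

```json
[
    {
        "name": "mgmt-server-01",
        "newIpAddress4": "10.22.33.44"
    }
]
```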
It’s not a bad idea to paste this into a JSON validator to ensure the syntax is proper. Also note the square outer brackets, even though there’s only one entry in the array.
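If Python is available (most workstations have it; whether your Gaia build does is worth checking), the standard library can do the validation locally instead of a web validator. A small sketch:

```shell
# Validate a JSON file's syntax; returns nonzero (and stays quiet) on errors.
json_ok() {
    python3 -m json.tool "$1" > /dev/null 2>&1
}

# Example: json_ok /var/log/mdss.json && echo "valid" || echo "syntax error"
```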
Importing the Database
Now we’re ready to copy the exported file from the R80.40 server. /var/log typically has the most room, so that’s a good location. Then run the import command. For me, this took around 20-30 minutes.
```
scp firstname.lastname@example.org:export.tgz /var/log/
cd $FWDIR/scripts
./migrate_server import -v R81.10 -l /var/log/export.tgz

Importing the Management Database
Operation started at Thu Jan 5 16:51:22 GMT 2023
The import operation finished successfully.
```
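Before pulling the archive over, it doesn't hurt to confirm /var/log really has the headroom. This is plain Linux `df`, nothing Check Point specific; the 2 GB threshold is an assumption sized to the ~1.1 GB export:

```shell
# Warn if /var/log has less than ~2 GB free (export was ~1.1 GB, plus headroom).
need_mb=2048
avail_mb=$(df -Pm /var/log | awk 'NR==2 {print $4}')
if [ "$avail_mb" -lt "$need_mb" ]; then
    echo "Only ${avail_mb} MB free in /var/log; want at least ${need_mb} MB" >&2
fi
```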
If a “Failed to import” message appears, check the /var/log/mdss.json file again. Make sure the brackets, quotes, commas, and colons are in the proper place.
After giving the new server a reboot for good measure, log in to the CLI and verify services are up and running. Note that it takes 2-3 minutes for the services to be fully running:
```
cd $FWDIR/scripts
./cpm_status.sh
Check Point Security Management Server is during initialization

./cpm_status.sh
Check Point Security Management Server is running and ready
```
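Rather than re-running cpm_status.sh by hand, a small polling loop can wait for the ready string. The command and string come from the output above; the helper itself is a generic sketch of mine, not a Check Point utility:

```shell
# Poll a status command until its output contains the wanted string.
# Usage: wait_for "<command>" "<string>" [tries] [delay-seconds]
wait_for() {
    local cmd=$1 want=$2 tries=${3:-60} delay=${4:-5}
    local i
    for i in $(seq "$tries"); do
        if $cmd 2>/dev/null | grep -q "$want"; then
            echo "ready after $i check(s)"
            return 0
        fi
        sleep "$delay"
    done
    echo "gave up after $tries checks" >&2
    return 1
}

# wait_for "$FWDIR/scripts/cpm_status.sh" "running and ready"
```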
I then tried to log in via R81.10 SmartConsole and got a license error message.

This is expected. The /var/log/mdss.json only manages the connection to the gateways; it doesn’t have anything to do with licensing for the management server itself. And I would guess that doing the import results in the 14-day trial license being overridden. Just to confirm that theory, I launched a PAYG VM, re-did the migration, and no longer saw this error.
Updating the Management Server License
Log in to User Center -> Assets/Info -> Product Center, locate the license, change the IP address, and install the new license. Since SmartConsole won’t load, this must be done via the CLI.
```
cplic put 10.22.33.44 never XXXXXXX
```
I then rebooted and waited 2-3 minutes for services to fully start. At this point, I was able to log in to SmartConsole and see the gateways, but they all showed red. This is also expected: to make them green, policy must be installed.
I first did a database install for the management server itself (Menu -> Install Database), which was successful. Then I tried a policy install on the gateways and got a surprise: the policy push failed, complaining of a TCP connection failure on port 18191.
From the Management Server, I tried a basic telnet test for port 18191 and it did indeed fail:
```
telnet 10.22.33.121 18191
Trying 10.22.33.121..
```
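As an aside, telnet isn't always installed; bash's /dev/tcp pseudo-file can run the same reachability test. A generic sketch (host and port as above; the 3-second timeout is an arbitrary choice):

```shell
# Return 0 if a TCP connection to host $1, port $2 succeeds within 3 seconds.
port_open() {
    timeout 3 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null
}

# Example: port_open 10.22.33.121 18191 && echo reachable || echo "blocked or rejected"
```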
At first I thought the issue was firewall rules, but concluded that the port 18191 traffic was reaching the gateway and being rejected, which indicates a SIC issue. Sure enough, a quick Google search pointed me to this:

Policy installation fails with “TCP connection failure port=18191”
Indeed, the Check Point deployment template for GCP uses “member-a” and “member-b” as the hostname suffixes for the gateways, but we gave them slightly different names in order to be consistent with our internal naming scheme.
The fix is to change the hostname in the CLI to match the gateway name configured in SmartConsole:
```
cp-cluster-member-a> set hostname newhostname
cp-cluster-member-01> set domainname mydomain.org
cp-cluster-member-01> save config
```
After that, the telnet test to port 18191 was successful, and SmartConsole indicated some communication.
Next, I reset SIC on both gateways:
```
cp-cluster-member-01> cpconfig
This program will let you re-configure your Check Point products configuration.

Configuration Options:
----------------------
(1)  Licenses and contracts
(2)  SNMP Extension
(3)  PKCS#11 Token
(4)  Random Pool
(5)  Secure Internal Communication
(6)  Disable cluster membership for this gateway
(7)  Enable Check Point Per Virtual System State
(8)  Enable Check Point ClusterXL for Bridge Active/Standby
(9)  Hyper-Threading
(10) Check Point CoreXL
(11) Automatic start of Check Point Products
(12) Exit

Enter your choice (1-12) :5

Configuring Secure Internal Communication...
============================================
The Secure Internal Communication is used for authentication between Check Point components

Trust State: Trust established

Would you like re-initialize communication? (y/n) [n] ? y

Note: The Secure Internal Communication will be reset now, and all Check Point Services will be stopped (cpstop).
No communication will be possible until you reset and re-initialize the communication properly!
Are you sure? (y/n) [n] ? y
Enter Activation Key:
Retype Activation Key:
initial_module: Compiled OK.
initial_module: Compiled OK.

Hardening OS Security: Initial policy will be applied until the first policy is installed

The Secure Internal Communication was successfully initialized

Configuration Options:
----------------------
(1)  Licenses and contracts
(2)  SNMP Extension
(3)  PKCS#11 Token
(4)  Random Pool
(5)  Secure Internal Communication
(6)  Disable cluster membership for this gateway
(7)  Enable Check Point Per Virtual System State
(8)  Enable Check Point ClusterXL for Bridge Active/Standby
(9)  Hyper-Threading
(10) Check Point CoreXL
(11) Automatic start of Check Point Products
(12) Exit

Enter your choice (1-12) :12

Thank You...
cpwd_admin: Process AUTOUPDATER terminated
cpwd_admin: Process DASERVICE terminated
```
The services will restart, which triggers a failover. At this point, I went into SmartConsole, edited the member, reset SIC, re-entered the key, and initialized. The policy pushes were then successful and everything was green. The last remaining issue was an older R80.30 cluster complaining that the IDS module was not responding; this resolved itself the next day.