Python3 CGI Script Execution on FreeBSD & Apache: env: python3: No such file or directory

My new Synology 218+ supports virtual machines, but with only 2 GB of RAM pre-installed, it had just ~200 MB left available for a VM. This isn’t enough for any recent version of Linux, but it is adequate for a very basic FreeBSD Apache web server. I was able to create the VM and install Apache, python3, and several Python libraries with no problem, but ran into issues trying to get Python CGI scripts working. I had already enabled CGI by creating the file /usr/local/etc/apache24/Includes/cgi.conf and restarting Apache:

LoadModule cgi_module libexec/apache24/mod_cgi.so

<Directory "/usr/local/www/apache24/cgi-bin">
    AllowOverride None
    Options +ExecCGI
    AddHandler cgi-script .cgi
    Require all granted
</Directory>

Then created a very simple Python3 CGI script:

#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import os
print("Status: 200\nContent-Type: text/plain; charset=UTF-8\n")
print("PATH =", os.environ['PATH'])

The 500 error would show up in the apache logs like this:

[Tue Sep 01 11:45:45.476977 2020] [cgi:error] [pid 1126] [client 192.168.1.100:47820] AH01215: env: python3: No such file or directory: /usr/local/www/apache24/cgi-bin/python3.cgi
[Tue Sep 01 11:45:45.477145 2020] [cgi:error] [pid 1126] [client 192.168.249.197:47820] End of script output before headers: python3.cgi

It almost seemed like ‘env’ itself was the problem. Yet when run from the CLI, the script worked fine:

root@:/usr/local/www/apache24/cgi-bin # ./python3.cgi 
Status: 200
Content-Type: text/plain; charset=UTF-8

PATH = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/root/bin

In desperation, I tried changing the first line to explicitly reference the python3 binary’s location:

#!/usr/local/bin/python3

The script then started returning 200s and revealed the problem: the PATH only contained /bin and /usr/bin, but on FreeBSD, python3 lives in /usr/local/bin since it’s installed from a package:

root@:/usr/local/www/apache24/cgi-bin # whereis python3
python3: /usr/local/bin/python3 /usr/ports/lang/python3

I didn’t want to change the first line of all my CGI scripts and have them break on Linux. So the better fix was to tell Apache to include /usr/local/bin in the path by adding this line to any of the config files:

SetEnv PATH /bin:/usr/bin:/usr/local/bin

I did not see any need to have /sbin, /usr/sbin, or /usr/local/sbin in the path since there are no valid interpreters in these paths. But having them in there doesn’t hurt anything.
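
A quick way to confirm the fix is to request the script over HTTP and check that PATH now includes /usr/local/bin. The IP is just an example for my VM, and the URL assumes the stock FreeBSD ScriptAlias for /cgi-bin/; adjust for your setup:

curl -i http://192.168.1.50/cgi-bin/python3.cgi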


“No Vlan association for STP Interface Member 1.0” when upgrading F5 BigIP in AWS from 13.1.1 to 13.1.3.2

After upgrading several of our BigIP-VEs in AWS from 13.1.1 to 13.1.3.2 without issue, I had big problems with a pair this afternoon.  The first one took forever to boot up, and when it did, it was complaining about an incomplete configuration.

I’ve seen this before and know it usually means a certain section of the configuration file couldn’t be migrated, and rather than ignoring that section, the unit just won’t load anything.  This is what was showing up in /var/log/ltm and on the console:

May 4 17:47:53 bigip warning mcpd[18134]: 01070932:4: Pending local Interface from cluster.: 1.0, configuration ignored
May 4 17:47:53 bigip warning mcpd[18134]: 01070932:4: Pending Interface: 1.0, configuration ignored
May 4 17:47:53 bigip err mcpd[18134]: 01070523:3: No Vlan association for STP Interface Member 1.0.
May 4 17:47:53 bigip emerg load_config_files: "/usr/bin/tmsh -n -g load sys config partitions all base " - failed. -- 01070523:3: No Vlan association for STP Interface Member 1.0. Error: failed to reset strict operations; disconnecting from mcpd. Will reconnect on next command.

The problem was the original 13.1.1 configuration file had this:

net interface 1.0 {
    media-fixed 10000T-FD
}

This is really an error from the get-go, since interface 1.0 is the eth0 / management interface and shouldn’t be in the “net interface” section.

My workaround was to reset to the factory config, then re-create the Self IPs and re-sync the cluster.  Alternately, the configuration file could be modified to simply remove the offending lines, as sketched below.
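
If you go the config-file route, the offending stanza lives in the stored base configuration, which can be edited and then reloaded (a sketch only; back up the file first):

# remove the "net interface 1.0 { ... }" stanza, then reload the base config
vi /config/bigip_base.conf
tmsh load sys config partitions all base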

Since I did not see this configuration on any other F5 BigIP-VEs, I suspect it was mistakenly inserted into the 13.1.1-0.0.4 AMI from F5 that I’d launched last summer.

Using AWS S3 Storage from Linux CLI

Start by installing aws-shell, then run the AWS CLI configure command to enter your access key, secret key, and default region:

sudo apt install aws-shell
aws configure

To list files in a bucket called ‘mybucket’:

aws s3 ls s3://mybucket

To upload a single file:

aws s3 cp /tmp/myfile.txt s3://mybucket/

To upload all files in a directory with a certain extension:

aws s3 cp /tmp/ s3://mybucket/ --recursive --exclude '*' --include '*.txt'

To recursively upload contents of a directory:

aws s3 cp /tmp/mydir/ s3://mybucket/ --recursive

To delete a single file:

aws s3 rm s3://mybucket/myfile.txt

To empty a bucket (delete all files, but keep bucket):

aws s3 rm s3://mybucket --recursive
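
Going the other direction works the same way: cp pulls down a single object, and sync mirrors a bucket or prefix into a local directory (the bucket and paths here are just examples):

aws s3 cp s3://mybucket/myfile.txt /tmp/
aws s3 sync s3://mybucket /tmp/mybucket-copy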


Upping the IPv4 Unicast Route Limit on a Nexus 93180YC-EX

We attempted to load a partial route table from CenturyLink (aka Level3) on a Cisco Nexus 93180YC-EX and found the switch threw IPFIB-SLOT1-2-UFIB_ROUTE_CREATE error messages starting at around 200,000 routes:

IPFIB-SLOT1-2-UFIB_ROUTE_CREATE: Unicast route create failed for INS unit 0, VRF: 9, 202.153.28.0/24, flags:0x0, intf:0xd001a, Error: FIB TCAM FULL For IP Routes(1129381967)
IPFIB-SLOT1-2-UFIB_ROUTE_CREATE: Unicast route create failed for INS unit 0, VRF: 9, 202.153.27.0/24, flags:0x0, intf:0xd001a, Error: FIB TCAM FULL For IP Routes(1129381967)
IPFIB-SLOT1-2-UFIB_ROUTE_CREATE: Unicast route create failed for INS unit 0, VRF: 9, 202.153.26.0/24, flags:0x0, intf:0xd001a, Error: FIB TCAM FULL For IP Routes(1129381967)

This command shed some light on the problem:

MySwitch# show vdc MySwitch resource

Resource                   Min       Max       Used      Unused    Avail   
--------                   ---       ---       ----      ------    -----   
vlan                       16        4094      45        0         4049    
vrf                        2         4096      9         0         4087    
port-channel               0         511       14        0         497     
u4route-mem                248       248       2         246       246     
u6route-mem                96        96        1         95        95      
m4route-mem                58        58        1         57        57      
m6route-mem                8         8         1         7         7

So by default, only 248 MB of the switch’s TCAM memory is allocated to IPv4 unicast routes.  This means in a typical two-ISP deployment, it won’t be able to handle more than a couple hundred thousand routes.

In cases where the desired IPv4 route table exceeds this, a different template such as internet-peering can be set:

MySwitch(config)# system routing ?
  template-dual-stack-host-scale  Dual Stack Host Scale
  template-internet-peering       Internet Peering
  template-lpm-heavy              LPM Heavy
  template-mpls-heavy             MPLS Heavy Scale
  template-multicast-heavy        Multicast Heavy Scale

This requires a reboot and will show a scary message about disabling multicast routing:

MySwitch(config)# system routing template-internet-peering 
Warning: The command will take effect after next reload.
Multicast is not supported in this profile
Increase the LPM scale by resetting multicast LPM max-scale to 0 using below CLI
hardware profile multicast max-limit lpm-entries 0
Note: This requires copy running-config to startup-config before switch reload.
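
Following the warning’s advice, the remaining steps before the reload look something like this (assuming multicast routing isn’t needed on this switch):

MySwitch(config)# hardware profile multicast max-limit lpm-entries 0
MySwitch(config)# end
MySwitch# copy running-config startup-config
MySwitch# reload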

After the reboot, the memory can be carved out to a larger amount:

MySwitch(config)# vdc MySwitch
MySwitch(config-vdc)# limit-resource u4route-mem minimum 256 maximum 512
MySwitch(config-vdc)# exit

And now we have more TCAM allocated to IPv4 unicast routes:

MySwitch# show vdc MySwitch resource 

Resource                   Min       Max       Used      Unused    Avail   
--------                   ---       ---       ----      ------    -----   
vlan                       16        4094      45        0         4049    
vrf                        2         4096      9         0         4087    
port-channel               0         511       14        0         497     
u4route-mem                512       512       2         510       510     
u6route-mem                96        96        1         95        95      
m4route-mem                58        58        1         57        57      
m6route-mem                8         8         1         7         7

And now we’re able to take about 286k routes from CenturyLink no problem:

Neighbor   V    AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
4.15.16.9  4  3356  122187     289   297236    0    0 00:04:55 286770
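
To double-check how many of those prefixes actually made it into the routing table, the usual summary command works (the VRF name here is a placeholder):

MySwitch# show ip route summary vrf MyVRF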

The part I still don’t understand is that the same amount of resources remains allocated to IPv6 unicast and multicast routes.  It’s also not totally clear what the total TCAM memory is, but I would assume 1-2 GB.


Upgrading Checkpoint Management Server in AWS from R80.20 to R80.30

Unfortunately it is not possible to simply upgrade an existing CheckPoint management server in AWS.  A new one must be built, with the database manually exported from the old instance and imported to the new one.

There is a CheckPoint knowledge base article on this, but I found it to have several errors and to be confusing about which version of the tools should be used.

Below is the process I used to go from R80.20 to R80.30:

Log in to the old R80.20 server, then download and extract the R80.30 migration tools:

cd /home/admin
tar -zxvf Check_Point_R80.30_Gaia_SecurePlatform_above_R75.40_Migration_tools.tgz

Run the export job to create an archive of the database:

./migrate export --exclude-licenses /tmp/R8020Backup.tgz

Copy this .tgz file to /tmp on the new R80.30 management server.
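
For example, with SCP directly from the old server (the hostname is a placeholder, and the admin account’s shell must permit SCP):

scp /tmp/R8020Backup.tgz admin@new-r8030-mgmt:/tmp/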

On the new management server, run the import job:

cd $FWDIR/bin/upgrade_tools
./migrate import /tmp/R8020Backup.tgz 
The import operation will eventually stop all Check Point services (cpstop)
Do you want to continue? (y/n) [n]? y

After a few minutes, the operation will complete and you’ll be prompted to start services again.

Finish by upgrading SmartConsole to R80.30 and connecting to the new R80.30 server.  I’ve noticed the first connection can be very slow, but it will eventually complete and all the old gateways and policies will be there.

Cisco IOS-XE SCP Server with RADIUS authentication

I’ve been wanting to try out SCP to copy IOS images to routers for a while, as I figured it would be faster and cleaner than FTP/TFTP.  There are essentially four tricks to getting it working:

  1. Having the correct AAA permissions
  2. Understanding the SCP syntax and file systems
  3. Making the scp command from the router VRF aware, if required
  4. 16.6.7 or 16.9.4 or newer code.  Performance on older IOS-XE versions is terrible

First, SSH has to be enabled and of course the SCP server must be activated:

ip ssh version 2
ip scp server enable

After doing so, verify the router is accessible via SSH.  If not, try generating a fresh key:

Router(config)#crypto key generate rsa modulus 2048

Now on to the AAA configuration.  The important step is to have accounts automatically land at privilege level 15 without manually entering enable mode.  This is done with the “aaa authorization exec” command:

aaa new-model
!
username admin privilege 15 password 7 XXXXXXX
!
aaa group server radius MyRadiusServer
 server-private 10.1.1.100 auth-port 1812 acct-port 1813 key 7 XXXXXXXX
 ip vrf forwarding MyVRF
!
aaa authentication login default local group MyRadiusServer
aaa authentication enable default none
aaa authorization config-commands
aaa authorization exec default local group MyRadiusServer if-authenticated

The RADIUS server will also need this vendor-specific attribute in the policy:

Vendor: Cisco
Name: Cisco-AV-Pair
Value: priv-lvl=15

If I SSH to the router using a RADIUS account, I should automatically see enable mode:

$ ssh billy@10.1.1.1
Password: 
Router#show privilege
Current privilege level is 15

I can now upload IOS images to a router with IP address 10.1.1.1 like this:

scp csr1000v-universalk9.16.06.06.SPA.bin billy@10.1.1.1:bootflash:/csr1000v-universalk9.16.06.06.SPA.bin
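
Pulling a file off the router works the same way, with the source and destination swapped:

scp billy@10.1.1.1:bootflash:/csr1000v-universalk9.16.06.06.SPA.bin .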

If initiating the copy from the router itself and the egress interface is in a VRF (e.g. the management VRF), the SSH source interface must be specified:

ip ssh source-interface GigabitEthernet0

And simply use the IOS copy command:

copy scp://billy@10.1.1.2:/csr1000v-universalk9.16.06.06.SPA.bin bootflash:
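
Once the copy completes, it’s worth checking the image hash before booting it (compare against the MD5 published on Cisco’s download page):

Router#verify /md5 bootflash:csr1000v-universalk9.16.06.06.SPA.bin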

Note: SCP performance in IOS-XE 16.6.5 was very poor, but excellent in 16.6.7 and 16.9.4.

IKEv2 VPNs to AWS on Cisco IOS devices

I hadn’t worked with AWS VPNs since January and missed their announcement of IKEv2 support.  The configuration is similar to GCP, with the one exception being the SA lifetime: AWS appears to still use 8 hours (28800 seconds), as opposed to GCP’s 10 hours and the Cisco IOS default of 24 hours (86400 seconds).

Configuring an IKEv2 VPN to AWS

Create an IKEv2 Proposal and Policy, if not done already:

crypto ikev2 proposal IKEV2_PROPOSAL 
  encryption aes-cbc-256 aes-cbc-128    
  integrity sha512 sha384 sha256 sha1   
  group 16 14 2                       ! 16 = 4096, 14 = 2048, 2 = 1024 bit
crypto ikev2 policy IKEV2_POLICY 
  match fvrf any
  proposal IKEV2_PROPOSAL
!

Add entries to an existing keyring, or create a separate one:

crypto ikev2 keyring AWS_IKEV2_KEYRING
 peer vpn-0d4fe4b8d9406518f
  address 13.52.37.68
  pre-shared-key XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

Create an IKEv2 Profile that links to said keyring.  If behind NAT, specify the public IP:

crypto ikev2 profile AWS_IKEV2_PROFILE
 match identity remote address 0.0.0.0 
 identity local address 203.0.113.222    ! Public IP, if behind NAT
 authentication local pre-share
 authentication remote pre-share
 keyring local AWS_IKEV2_KEYRING
 lifetime 28800
!

The IPsec parameters are the same as with IKEv1, except the IKEv2 profile is added:

crypto ipsec transform-set ESP_AES128_SHA esp-aes esp-sha-hmac 
 mode tunnel
crypto ipsec profile AWS_IPSEC_PROFILE
 set security-association lifetime kilobytes disable
 set transform-set ESP_AES128_SHA 
 set pfs group2
 set ikev2-profile AWS_IKEV2_PROFILE

Finally, apply IPsec profile to the VTI with the internet-facing interface as source:

interface Tunnel1
 ip address 169.254.231.142 255.255.255.252
 ip virtual-reassembly in
 ip tcp adjust-mss 1379
 tunnel source GigabitEthernet0/0
 tunnel mode ipsec ipv4
 tunnel destination 13.52.37.68
 tunnel protection ipsec profile AWS_IPSEC_PROFILE

It was good to see them negotiate a nice strong AES-256 / SHA256 / Group16 (4096-bit) SA:

Router#show crypto ikev2 sa
Tunnel-id Local Remote fvrf/ivrf Status 
5 192.168.1.123/4500 35.52.37.68/4500 none/none READY 
Encr: AES-CBC, keysize: 256, PRF: SHA256, Hash: SHA256, DH Grp:16, Auth sign: PSK, Auth verify: PSK
Life/Active Time: 86400/1053 sec
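
To also look at the IPsec (child) SAs and packet counters, the standard command applies:

Router#show crypto ipsec sa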

A note about Cisco IOS software versions…

I’ve tested this configuration on a 1921 ISR G2 running IOS 15.5(3)M10

It seems Cisco introduced a slew of bugs relating to IKEv2 for both IOS and IOS-XE in mid-2019:

  • CSCvh66033 – IKEv2 – Crash with segmentation fault when debugs crypto ikev2 are enabled
  • CSCve08418 – IPsec/IKEv2 Installation Sometimes Fails With Simultaneous Negotiations
  • CSCvd69373 – IKEv2: Unable to initiate IKE session to a specific peer due to ‘in-neg’ SA Leak
  • CSCvg15158 – DMVPN session get stuck in NHRP and UP-NO-IKE state without active IKEv2 session until rekey

So, upgrading to the latest software version is highly recommended.


CheckPoint Initial Configuration via CLI

The default credentials are admin/admin

Verify the management interface

show management interface

Set the management interface with IP address 192.168.1.222/255.255.255.0

set interface Mgmt ipv4-address 192.168.1.222 mask-length 24

Verify IP address for management interface

show interface Mgmt ipv4-address

Ping something

ping 192.168.1.1

Set the default route to 192.168.1.1

set static-route default nexthop gateway address 192.168.1.1 priority 1 on

Create internal route for 10.0.0.0/8 via gateway 10.10.10.10

set static-route 10.0.0.0/8 nexthop gateway address 10.10.10.1 on

Verify routing

show route

Set DNS servers

set dns primary 8.8.8.8
set dns secondary 9.9.9.9

Save the configuration

save config

Show all interfaces

show interfaces

Show interfaces with IP addresses configured

show security-gateway monitored-interfaces

Create an 802.3ad (LACP) bonded logical interface with eth1 & eth2 as physical members

add bonding group 1
set bonding group 1 mode 8023AD
set bonding group 1 lacp-rate fast
add bonding group 1 interface eth1
add bonding group 1 interface eth2
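
Verify the bond and its member interfaces came up (clish command, assuming a recent Gaia version)

show bonding group 1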

Create a VLAN sub-interface on bond1 with 802.1q tag 123

add interface bond1 vlan 123

Check software version

show version all

Get hardware information and serial number

show asset system

Change admin password

set user admin password

Set expert mode password

set expert-password

Check policy status

fw stat

Clear the current local policy

fw unloadlocal

Check site-to-site VPN status

vpn tu tlist

Reset VPN tunnels (list/delete IKE/IPSec SAs)

vpn tu

Modify license, configure SNMP, reset SIC connection:

cpconfig

Verify number of CPUs

fw ctl multik stat

View CPU to connection distribution table

fw ctl affinity -l -r

Reboot the firewall

reboot

SYSMGR-2-CFGWRITE_ABORTED_CONFELEMENT_RETRIES

Fifteen minutes on a Cisco Nexus 9k and I already found a cute bug.

SYSMGR-2-CFGWRITE_ABORTED_CONFELEMENT_RETRIES: Copy R S failed as config-failure retries are ongoing. Type "show nxapi retries" for checking the ongoing retries
# show nxapi retries
#1. Dn: sys/vpc/inst/dom/keepalive, Operation: modify, Src: vpc bmp: 0x4.

Quickest stupid work-around I could find:

# copy running-config bootflash:latestconfig.txt
# copy bootflash:latestconfig.txt startup-config 
# reload

And there’s a similar-looking bug:

# copy run start
[########################################] 100%
Configuration update aborted: request was aborted


Authentication to Synology Directory Server (LDAP Server)

Upon configuring Directory Server, the Synology will display the LDAP connection details (Base DN, Bind DN, and so on) used in the examples below.

The password configured during setup is the password for the ‘root’ user.
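
Before pointing any devices at it, it’s easy to verify the LDAP server responds and the bind DN works from any Linux box with ldap-utils installed (the username ‘billy’ is just an example):

ldapsearch -x -H ldap://192.168.1.100 -D "uid=root,cn=users,dc=myserver,dc=mydomain,dc=com" -W \
  -b "dc=myserver,dc=mydomain,dc=com" "(uid=billy)"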

Configuration for Cisco ASA / AnyConnect

aaa-server SYNOLOGY protocol ldap
aaa-server SYNOLOGY (Inside) host 192.168.1.100
 ldap-base-dn dc=myserver,dc=mydomain,dc=com
 ldap-scope subtree
 ldap-naming-attribute uid
 ldap-login-password <root user password>
 ldap-login-dn uid=root,cn=users,dc=myserver,dc=mydomain,dc=com
 server-type auto-detect
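
A quick way to confirm the ASA can bind and authenticate a user against the Synology (the username and password here are just examples):

test aaa-server authentication SYNOLOGY host 192.168.1.100 username billy password Secret123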

Configuration for FortiGate GUI

  • Common Name Identifier = uid
  • Distinguished Name = cn=users,dc=myserver,dc=mydomain,dc=com
  • Bind Type = Simple

Configuration for F5 BigIP

Need to change Authentication from ‘Basic’ to ‘Advanced’ to set Login LDAP attribute

  • Remote Directory Tree: dc=myserver,dc=mydomain,dc=com
  • Scope: Sub
  • BIND DN: uid=root,cn=users,dc=myserver,dc=mydomain,dc=com
  • Password: <root user password>
  • User Template: uid=%s,cn=users,dc=myserver,dc=mydomain,dc=com
  • Login LDAP Attribute: uid

To use Remote Role Groups:

Attribute String: memberOf=cn=users,cn=groups,dc=myserver,dc=mydomain,dc=com