Configure Squid for HTTPS on Debian VM

Verify we’re running the latest version of Debian

lsb_release -a
No LSB modules are available.
Distributor ID:	Debian
Description:	Debian GNU/Linux 11 (bullseye)
Release:	11
Codename:	bullseye

Become root

sudo su

Update packages

apt update && apt upgrade -y

Install the Squid package that is built with OpenSSL support

apt install squid-openssl

Create a local CA, using a 4096-bit key and SHA-2 hashing. This one is good for the next 10 years. OpenSSL will prompt for the certificate subject fields, which can be left at their defaults

openssl req -new -newkey rsa:4096 -sha256 -days 3653 -nodes -x509 -keyout /etc/squid/CA.key -out /etc/squid/CA.crt

Combine the key and cert into a single file for convenience

cd /etc/squid
cat CA.key CA.crt > CA.pem

Initialize the directory used for minted certs and set permissions so squid owns it

/usr/lib/squid/security_file_certgen -c -s /var/spool/squid/ssl_db -M 4MB
chown -R proxy:proxy /var/spool/squid

Finally, configure Squid to use HTTPS by adding these lines to /etc/squid/squid.conf

http_port 3128 ssl-bump cert=/etc/squid/CA.pem generate-host-certificates=on options=NO_SSLv3
ssl_bump bump all
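
Depending on the Squid version, squid.conf may also need to point at the certificate-generation helper and the database initialized earlier; if this directive isn’t already present, it looks like this:

sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/spool/squid/ssl_db -M 4MB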

Restart Squid

service squid restart

Test connections by pointing a client at the proxy on port 3128. Note the certificate is minted by our CA and good for 10 years:

export https_proxy=http://localhost:3128

curl -v --cacert CA.crt  https://teapotme.com 

* Uses proxy env variable https_proxy == 'http://localhost:3128'
*   Trying ::1:3128...
* Connected to localhost (::1) port 3128 (#0)
* allocate connect buffer!
* Establish HTTP proxy tunnel to teapotme.com:443
> CONNECT teapotme.com:443 HTTP/1.1
> Host: teapotme.com:443
> User-Agent: curl/7.74.0
> Proxy-Connection: Keep-Alive
> 
< HTTP/1.1 200 Connection established
< 
* Proxy replied 200 to CONNECT request
* CONNECT phase completed!
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*  CAfile: CA.crt
*  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* CONNECT phase completed!
* CONNECT phase completed!
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server did not agree to a protocol
* Server certificate:
*  subject: CN=teapotme.com
*  start date: Nov  6 04:03:48 2022 GMT
*  expire date: Nov  6 04:03:48 2032 GMT
*  subjectAltName: host "teapotme.com" matched cert's "teapotme.com"
*  issuer: C=AU; ST=Some-State; O=Internet Widgits Pty Ltd; CN=localhost
*  SSL certificate verify ok.
> GET / HTTP/1.1
> Host: teapotme.com
> User-Agent: curl/7.74.0
> Accept: */*
> 
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* old SSL session ID is stale, removing
* Mark bundle as not supporting multiuse
< HTTP/1.1 418 I'm a teapot
< Server: nginx
< Date: Sun, 06 Nov 2022 04:08:13 GMT
< Content-Type: application/json
< Content-Length: 483
< X-Cache: MISS from test-1
< X-Cache-Lookup: MISS from test-1:3128
< Via: 1.1 test-1 (squid/4.13)
< Connection: keep-alive
< 
{
    "host": "teapotme.com",
    "user-agent": "curl/7.74.0",
    "x-forwarded-for": "::1, 35.233.234.155, 172.17.0.1",
    "x-forwarded-proto": "https",
}


Git clone / pull / push fails with ‘no mutual signature algorithm’ on Ubuntu 22 to GCP Cloud Source

I created a new Ubuntu 22 VM a few weeks ago and noticed when trying a git pull or git push to a GCP Cloud Source Repo, I wasn’t having any luck when using SSH:

cd myrepo/
git pull
myusername@myorg.com@source.developers.google.com: Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

The SSH key was a standard RSA with the public key uploaded to Cloud Source SSH Keys, so there was no obvious reason why it wasn’t working.

The next step was to get some type of debug or error message explaining why the public key exchange wasn’t working. Newer versions of Git can turn on SSH debugging by setting the GIT_SSH_COMMAND environment variable, so I did that:

export GIT_SSH_COMMAND="ssh -vvv"
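
For a one-off test, the variable can also be set inline for just a single command rather than exported:

GIT_SSH_COMMAND="ssh -vvv" git pull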

Re-running git pull then produced some useful debug output:

debug1: Authentications that can continue: publickey
debug3: start over, passed a different list publickey
debug3: preferred gssapi-with-mic,publickey,keyboard-interactive,password
debug3: authmethod_lookup publickey
debug3: remaining preferred: keyboard-interactive,password
debug3: authmethod_is_enabled publickey
debug1: Next authentication method: publickey
debug1: Offering public key: /home/j5/.ssh/id_rsa RSA SHA256:JBgC+R4Ozel+YI+7oEv1UOf9/jLqGBhysN8bpoEDbPU
debug1: send_pubkey_test: no mutual signature algorithm

The ‘no mutual signature algorithm’ message indicated one side didn’t like the signing algorithm. I did a Google and found an article indicating that OpenSSH on Ubuntu 22 disables the ssh-rsa (RSA with SHA-1) signature algorithm by default. I can’t change the setting on the Cloud Source side, so on the Ubuntu 22 client, I did this as a quick work-around (as root):

echo "PubkeyAcceptedKeyTypes +ssh-rsa" > /etc/ssh/ssh_config.d/enable_rsa.conf

And now the git pull/push works without issue.
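
Rather than re-enabling ssh-rsa for every destination, the same setting could instead be scoped to just the Cloud Source host in ~/.ssh/config; a minimal sketch:

Host source.developers.google.com
    PubkeyAcceptedKeyTypes +ssh-rsa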

An alternative solution is to use an ECDSA key instead of RSA. To generate a new ECDSA key:

ssh-keygen -t ecdsa
cat ~/.ssh/id_ecdsa.pub

Then copy and paste the key into the SSH Key Manager. It will be easier to copy and paste than an RSA key, since it’s shorter.

Rancid: no matching key exchange method found. Their offer: diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1

Time to move Rancid to a newer VM again, this time it’s Ubuntu 20. Hit a snag when I tried a test clogin run:

$ clogin myrouter
Unable to negotiate with 1.2.3.4 port 22: no matching key exchange method found.  Their offer: diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1

OpenSSH removed SHA-1 from the defaults a while back, which makes sense since the migration to SHA-2 began several years ago. So looks like SSH is trying to use SHA-2 but the Cisco Router is defaulting to SHA-1, and something has to give in order for negotiation to succeed.

My first thought was to tell the Cisco router to use SHA-2, and this is possible for the MAC setting:

Router(config)#ip ssh server algorithm mac ?
  hmac-sha1      HMAC-SHA1 (digest length = key length = 160 bits)
  hmac-sha1-96   HMAC-SHA1-96 (digest length = 96 bits, key length = 160 bits)
  hmac-sha2-256  HMAC-SHA2-256 (digest length = 256 bits, key length = 256 bits)
  hmac-sha2-512  HMAC-SHA2-512 (digest length = 512 bits, key length = 512 bits)

Router(config)#ip ssh server algorithm mac hmac-sha2-256 hmac-sha2-512
Router(config)#do sh ip ssh | inc MAC       
MAC Algorithms:hmac-sha2-256,hmac-sha2-512

But not for key exchange, which apparently only supports SHA-1:

Router(config)#ip ssh server algorithm kex ?
  diffie-hellman-group-exchange-sha1  DH_GRPX_SHA1 diffie-hellman key exchange algorithm
  diffie-hellman-group14-sha1         DH_GRP14_SHA1 diffie-hellman key exchange algorithm

Thus, the only option is to change the setting on the client. The SSH client has CLI options for ciphers and MACs:

-c: sets the cipher (encryption) list
-m: sets the MAC (authentication) list

One quick solution is to tell the SSH client to accept this key exchange by adding this line to the /etc/ssh/ssh_config file:

KexAlgorithms +diffie-hellman-group14-sha1
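
If you’d rather not loosen the default for every destination, the same directive can be scoped to specific hosts in ssh_config; a sketch, with the host pattern as a placeholder for your own routers:

Host router-*
    KexAlgorithms +diffie-hellman-group14-sha1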

But, I wanted to change the setting only for Rancid and not SSH in general, hoping that Cisco adds SHA-2 key exchange soon. I found out it is possible to set SSH options in the .cloginrc file. The solution is this:

add  sshcmd  *  {ssh\  -o\ KexAlgorithms=+diffie-hellman-group14-sha1}

Clogin is now successful:

$ clogin myrouter
spawn ssh -oKexAlgorithms=+diffie-hellman-group14-sha1 -c aes128-ctr,aes128-cbc,3des-cbc -x -l myusername myrouter
Password:
Router#_

By the way, I stayed away from diffie-hellman-group-exchange-sha1 as it’s considered insecure, whereas diffie-hellman-group14-sha1 is deprecated but still widely deployed and still “strong enough”, probably thanks to its 2048-bit modulus.

Sidenote: this only affects Cisco IOS-XE devices. The Cisco ASA ships with this in the default configuration:

ssh key-exchange group dh-group14-sha256

Getting Let’s Encrypt SSL Certificates on Linux and FreeBSD

First install certbot. This is basically a Python tool that reads the web server configuration and makes the certificate request to the Let’s Encrypt API.

On Debian or Ubuntu:

sudo apt install certbot
sudo apt install python3-certbot-nginx
sudo apt install python3-certbot-apache

On FreeBSD (the py37- prefix tracks the default Python version, so it may differ on newer releases):

sudo pkg install py37-certbot
sudo pkg install py37-certbot-nginx
sudo pkg install py37-certbot-apache
sudo pkg install py37-acme

Note that certbot can only match virtual hosts that listen on port 80, since that is where the HTTP challenge comes in.

Run this command for Nginx:

sudo certbot certonly --nginx

Or for Apache:

sudo certbot certonly --apache

Certificates will get saved in /etc/letsencrypt/live on Linux, or /usr/local/etc/letsencrypt/live on FreeBSD

In each sub-directory, there will be 4 files created:

  • privkey.pem = The private key
  • cert.pem = The SSL certificate
  • fullchain.pem = SSL cert + Intermediate Cert chain. This format is required by NGINX and some other web servers
  • chain.pem = Just the intermediate cert
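
Let’s Encrypt certificates are only valid for 90 days, so renewal needs to be automated; the packages generally install a cron job or systemd timer for this. A dry run will confirm that renewal works:

sudo certbot renew --dry-run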

Here’s a Python script that will create a list of all directories with Let’s Encrypt certs:

#!/usr/bin/env python3

import os
import sys

# Let's Encrypt keeps live certs in a different base directory per OS
if sys.platform.startswith("linux"):
    src_dir = "/etc/letsencrypt/live"
elif sys.platform.startswith("freebsd"):
    src_dir = "/usr/local/etc/letsencrypt/live"
else:
    sys.exit("Unsupported platform: " + sys.platform)

# Each cert lives in a sub-directory named after the site
sites = [f.name for f in os.scandir(src_dir) if f.is_dir()]
for site in sites:
    if os.path.exists(os.path.join(src_dir, site, "cert.pem")):
        print("Letsencrypt certificate exists for site:", site)

Giving read-only access on Cisco IOS-XE with RADIUS authentication

Had a simple but time-consuming problem today.  Our Cisco IOS-XE 16.12 routers authenticate to AD via RADIUS to Microsoft NPS, with certain AD group(s) having admin privileges.  On the router side, configuration looks like this, where 10.10.10.10 is the NPS server:

aaa group server radius MyRADIUS
 server-private 10.10.10.10 auth-port 1812 acct-port 1813 key 0 abcd1234
 ip vrf forwarding Mgmt-intf
!
aaa new-model
aaa session-id common
!
aaa authentication login default local group MyRADIUS
aaa authentication enable default none
aaa authorization config-commands
aaa authorization exec default local group MyRADIUS if-authenticated

In NPS, I have a policy to match the appropriate Windows Group with Authentication Type = PAP and NAS Port Type = Virtual.  In the Settings tab, I then have this Vendor Specific RADIUS Attribute:

Name: Cisco-AV-Pair
Vendor: Cisco
Value: priv-lvl=15

This allows users in this group to SSH to any router and immediately have privilege level 15, which gives them full admin access.

Now I needed to give a certain AD group read-only access to view the running configuration.  So I created a new policy matching that AD group, and in the RADIUS attributes, under Vendor Specific, I added this one:

Name: Cisco-AV-Pair
Vendor: Cisco
Value: priv-lvl=7

The test account could then SSH to the router and verify privilege level was 7:

Router#show priv
Current privilege level is 7

I then downgraded privileges on each router so that only level 3 was required to view running-config:

privilege exec level 3 show running-config view full
privilege exec level 3 show running-config view
privilege exec level 3 show running-config
privilege exec level 3 show

But when doing “show running-config”, they would just get nothing back in return.  As it turns out, I needed one more step: lowering the privilege level required for viewing files on the router

file privilege 3

Now it works:

Router#show running-config view full
Building configuration...

Current configuration : 15124 bytes
!
! Last configuration change at 15:39:15 UTC Tue Mar 17 2020 by admin
! NVRAM config last updated at 15:39:21 UTC Tue Mar 17 2020 by admin
!
version 16.12
service timestamps debug datetime msec
service timestamps log datetime localtime show-timezone
service password-encryption
no service dhcp
service call-home

 

A very common mistake when parsing the HTTP X-Forwarded-For header

Let’s say you have a web server behind a load balancer that acts as a reverse proxy.  Since the load balancer is likely replacing the source IP of the packets with its own IP address, it stamps the client’s IP address in the X-Forwarded-For header and then passes the request along to the backend server.

Assuming the web server has been configured to log this header instead of client IP, a typical log entry will look like this:

198.51.100.111, 203.0.113.222 - - [10/Mar/2020:01:15:19 +0000] "GET / HTTP/1.1" 200 3041 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.122 Safari/537.36"

198.51.100.111 is the client’s IP address, and 203.0.113.222 is the Load Balancer’s IP address.   Pretty simple.  One would assume that it’s always the first entry that’s the client’s IP address, right?

Well no, because there’s an edge case.  Let’s say the client is behind a proxy server that’s already stamping X-Forwarded-For with the client’s internal IP address.  When the load balancer receives the HTTP request, it will often pass the X-Forwarded-For header along unmodified to the web server, which then logs the request like this:

192.168.1.49, 198.51.100.111, 203.0.113.222 - - [10/Mar/2020:01:15:05 +0000] "GET / HTTP/1.1" 200 5754 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36"

192.168.1.49 is the client’s true internal IP, but we don’t care about that since it’s RFC-1918 and not of any practical use.  So it’s actually the second to last entry (not necessarily the first!!!) that contains the client’s public IP address and the one that should be used for any Geo-IP functions.

Here’s some sample Python code:

#!/usr/bin/env python

import os

# Raw header value, e.g. "192.168.1.49, 198.51.100.111, 203.0.113.222"
x_fwd_for = os.environ.get('HTTP_X_FORWARDED_FOR', '')

if ", " in x_fwd_for:
    # Second-to-last entry: the hop just before our own load balancer
    client_ip = x_fwd_for.split(", ")[-2]
else:
    # No proxy chain in the header; fall back to the TCP source address
    client_ip = os.environ.get('REMOTE_ADDR', '127.0.0.1')

If behind NGINX, a better solution is to prefer the X-Real-IP header instead:

import os

# Prefer X-Real-IP (stamped by NGINX), then X-Forwarded-For, then the socket address
x_real_ip = os.environ.get('HTTP_X_REAL_IP', '')
x_fwd_for = os.environ.get('HTTP_X_FORWARDED_FOR', '')

if x_real_ip:
    client_ip = x_real_ip
elif x_fwd_for and ", " in x_fwd_for:
    # Second-to-last entry: the hop just before our own load balancer
    client_ip = x_fwd_for.split(", ")[-2]
else:
    client_ip = os.environ.get('REMOTE_ADDR', '127.0.0.1')
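
On the NGINX side, the header is stamped in the server or location block that proxies to the backend, using the standard directive:

proxy_set_header X-Real-IP $remote_addr;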

Other platforms can easily be configured to stamp an X-Real-IP header as well.  For example, on an F5 BigIP LTM load balancer, this iRule will do the job:

when HTTP_REQUEST {
    HTTP::header insert X-Real-IP [IP::remote_addr]
}

Deploying CheckPoint CloudGuard IaaS High Availability in GCP

A minimum of 3 NICs is required, broken down like so:

  • eth0 – Public / External Interface facing Internet
  • eth1 – Management interface used for Cluster sync.  Can also be used for security management server communication
  • eth2 – First internal interface.  Usually faces internal servers & load balancers.  Can be used for security management server communication

The Deployment launch template has a few fields which aren’t explained very well…

Security Management Server address

A static route to this destination via the management interface will be created at launch time.  If the Security Management server is reached via one of the internal interfaces, use a dummy address here such as 1.2.3.4/32 and add the static routes after launch.

SIC key

This is the password used to establish trusted communication (SIC) with the Security Management server. It can be set after launch, but if already known, it can be entered here to be pre-configured at launch.

Automatically generate an administrator password

This will create a new random ‘admin’ user password to allow access to the WebGUI right after launch, which saves some time, especially in situations where SSH is slow or blocked.

Note – SSH connections always require public key authentication, even with this enabled

Allow download from/upload to Check Point

This will allow the instance to communicate outbound to CheckPoint to check for updates.  It’s enabled by default on most CheckPoint appliances, so I’d recommend enabling this setting.

Networking

This is the real catch, and a pretty stupid one.  The form pre-fills these three subnets:

  • “Cluster External Subnet CIDR” = 10.0.0.0/24
  • “Management external subnet CIDR” = 10.0.1.0/24
  • “1st internal subnet CIDR” = 10.0.2.0/24

If using an existing network, erase the pre-filled value and then select the appropriate networks in the drop-down menus like so:

[Screenshot: selecting existing VPC networks from the drop-down menus]

Also, make sure all subnets have “Private Google Access” checked

Post-launch Configuration

After launch, access the gateways via SSH (public key) and/or the WebGUI to run through initial setup.  The first step is to set a new password for the admin user, plus the expert-mode password:

set user admin password

set expert-password

Since eth1 rather than eth0 is the management interface, I would recommend setting that accordingly:

set management interface eth1

I would also recommend adding static routes. The deployment will create static routes for RFC 1918 space via the management interface.  If these need to be overridden to go via an internal interface, the CLI command is something like this:

set static-route NETWORK/MASK nexthop gateway address NEXTHOP_ADDRESS on
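
For example, to route 10.0.0.0/8 via a hypothetical internal next hop of 10.0.2.1 (substitute your own network and gateway):

set static-route 10.0.0.0/8 nexthop gateway address 10.0.2.1 on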

Before importing into SmartConsole, you can test connectivity by trying to telnet to the security management server’s address on port 18191. Once everything looks good, don’t forget to save the configuration:

save config

Cluster Creation

In SmartConsole, create a new ClusterXL. When prompted for the cluster address, enter the primary cluster address.  The easy way to find this is to look at the deployment result under Tools -> Deployment manager -> Deployments

[Screenshot: deployment result showing the cluster’s primary external IP address]

Then add the individual gateways using their management interface addresses.   Walking through the wizard, you’ll need to define the type of each interface:

  • Set the first (external) interface to private use
  • Set the secondary (management) interface as sync/primary
  • Set subsequent interfaces as private use with monitoring.

Note the wizard tends to list the interfaces backwards: eth2, eth1, eth0

[Screenshot: interface classification in the cluster wizard]

The guide lists a few steps to do within the Gateway Cluster Properties, several of which I disagree with. Instead, I’d suggest the following:

  • Under Network Management, VPN Domain, create a group that lists the internal subnets behind the Checkpoint that will be accessed via site-to-site and remote access VPNs
  • On the eth1 interface, set Topology to Override / This Network / Network defined by routes. This should allow IP spoofing to remain enabled
  • Under NAT, do not check “Hide internal networks behind the Gateway’s external IP” as this will auto-generate a NAT rule that could conflict with site-to-site VPNs. Instead, create manual NAT rules in the policy.
  • Under IPSec VPN, Link Selection, Source IP address Settings, set Manual / IP address of chosen interface

Do a policy install on the new cluster, and a few minutes later, the GCP console should map the primary and secondary external IP addresses to the two instances

[Screenshot: GCP console showing the primary and secondary cluster external IP addresses]

Failover

Failover is done via API call and takes roughly 15 seconds.

On the external network (front end), the primary and secondary firewalls will each get an external IP address mapped.  CheckPoint calls these “primary-cluster-address” and “secondary-cluster-address”.  I’d argue “active” and “standby” would be better names, because the addresses flip during a failover event.

On the internal network (back end), failover is done by modifying the static route to 0.0.0.0/0.  The entries will be created on the internal networks when the cluster is formed.

Known Problems

The script $FWDIR/scripts/gcp_ha_test.py is missing

This is simply a mistake in CheckPoint’s documentation.  The correct file name is:

$FWDIR/scripts/google_ha_test.py

Deployment Fails with error code 504, Resource Error, Timeout expired

[Screenshot: deployment failure with error code 504]

Also, while the instances get created and External static IPs allocated, the secondary cluster IP never gets mapped and failover does not work.

Cause: a portion of the R80.30 deployment script relating to external IP address mapping assumes the default service account is enabled, but many enterprise customers will have the default service account disabled as a security best practice.  As of January 2020, the only fix is to enable the default service account, then redo the deployment.

StackDriver is enabled at launch, but never gets logs

Same issue as above.  As of January 2020, it depends on the default service account being enabled.

Setting Linux clients to use a proxy server

Assuming proxy server is 192.168.1.100, port 3128…

Most user-land applications, such as Curl

These use the http_proxy and https_proxy environment variables.  To set these in Bash:

export http_proxy=http://192.168.1.100:3128
export https_proxy=http://192.168.1.100:3128
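
Most of these tools also honor a no_proxy variable listing destinations that should bypass the proxy (the example domain here is a placeholder for your own):

export no_proxy=localhost,127.0.0.1,.example.internal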

For wget:

Edit /etc/wgetrc and uncomment these lines:

https_proxy = http://192.168.1.100:3128/
http_proxy = http://192.168.1.100:3128/


For Git:

git config --global http.proxy http://192.168.1.100:3128

or append it directly to ~/.gitconfig:

printf "[http]\n\tproxy = http://192.168.1.100:3128\n" >> ~/.gitconfig

 

Package installations/updates in Debian & Ubuntu:

Create the file /etc/apt/apt.conf.d/99http-proxy with this line:

Acquire::http::Proxy "http://192.168.1.100:3128";

Package installations/updates in RHEL & CentOS

Add this line to /etc/yum.conf under the [main] section:

proxy=http://192.168.1.100:3128

PIP on the fly


sudo pip install --proxy=http://192.168.1.100:3128 somepackage

To install a squid proxy server:

Debian & Ubuntu

sudo apt-get install squid
/etc/init.d/squid stop
/etc/init.d/squid start

RHEL & CentOS

sudo yum install squid
systemctl stop squid.service
systemctl start squid.service

In both cases the configuration file is /etc/squid/squid.conf

I’d recommend setting these for better performance and improved stability:

# Allocate 2 GB of disk space to disk caching
cache_dir ufs /var/spool/squid 2048 16 256
# Cache files smaller than 256 MB in size, up from the default of 4 MB
maximum_object_size 256 MB
# Up max file descriptors from default of 1024 to 4096
max_filedesc 4096
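
After editing squid.conf, restart the service using the commands above, or have Squid re-read its configuration in place without a full restart:

sudo squid -k reconfigure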

SSHing to an older Cisco ASA from a new Mac

Newer SSH clients, such as the one shipped with MacOS 10.14 (Mojave), may refuse the old key exchange groups and cipher suites offered by an older ASA.

One error message is about key exchange parameters:

no matching key exchange method found. Their offer: diffie-hellman-group1-sha1

You can work around this by enabling the older key exchange algorithm as a command-line option:

ssh -oKexAlgorithms=+diffie-hellman-group1-sha1 username@myasa.com

This can then be fixed server-side on the ASA by configuring Group 14 (2048-bit modulus):

ASA(config)# ssh key-exchange group ?
configure mode commands/options:
  dh-group1-sha1   Diffie-Hellman group 2
  dh-group14-sha1  Diffie-Hellman group 14
ASA(config)# ssh key-exchange group dh-group14-sha1

Likewise, you may get messages about cipher suites not matching:

no matching cipher found. Their offer: aes128-cbc,3des-cbc,aes192-cbc,aes256-cbc

The workaround is to specify matching ciphers as an option to SSH:

ssh -c aes128-cbc,3des-cbc username@myasa.com
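
To avoid typing these flags on every connection, both settings can be scoped to the ASA in ~/.ssh/config (with myasa.com standing in for your host):

Host myasa.com
    KexAlgorithms +diffie-hellman-group1-sha1
    Ciphers +aes128-cbc,3des-cbc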