Anonymous FTP security bug on Synology

% ftp ds218plus
Connected to ds218plus.mydomain.com.
220 DS218Plus FTP server ready.
Name (ds218plus:j5): ftp
331 Guest login ok, send your email address as password.
Password: 
230 Guest login ok, access restrictions apply.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> ls
229 Entering Extended Passive Mode (|||55703|)
150 Opening BINARY mode data connection for 'file list'.
drwxrwxrwx   1 root     root             4096 Sep 16 10:58 usbshare1
dr-xr-xr-x   1 root     root              142 May  9 14:48 web
drwxr-xr-x   1 root     root               38 Aug 18 22:21 docker

Whoa there! I should only see the directory for Anonymous FTP, not a list of shares. What’s more, I could download files from these directories even though they were never intended to be public.

The first step is to disable advanced permissions on whichever directory you want to use for Anonymous FTP. In my case, the share was called ‘public’:

After doing that, I can manually select the share to use for Anonymous FTP:
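To confirm the fix, a quick anonymous login should now show only the public share. Here’s a minimal sketch using Python’s ftplib (the hostname matches the session above; the password is a placeholder):

#!/usr/bin/env python3

from ftplib import FTP

# Anonymous FTP: user 'ftp' with an email-style password, per the banner above
ftp = FTP('ds218plus.mydomain.com')
ftp.login(user='ftp', passwd='guest@example.com')

# After the fix, only the Anonymous FTP share should be listed
for name in ftp.nlst():
    print(name)
ftp.quit()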

Clearing out /var/spool/clientmqueue in FreeBSD

My FreeBSD VM with its 10GB virtual hard disk ran out of space today. The primary culprit was /var/spool/clientmqueue consuming nearly 3GB of space:

# du -d 1 /var/spool/
8	/var/spool/output
4	/var/spool/opielocks
2955904	/var/spool/clientmqueue
4	/var/spool/dma
4	/var/spool/lpd
4	/var/spool/lock
4	/var/spool/mqueue
2955936	/var/spool/

But when trying to just delete the files, I got “argument list too long”:

# rm -f /var/spool/clientmqueue/*
/bin/rm: Argument list too long.

From a Google search I learned something interesting: find has a -delete option. This worked well:

# find /var/spool/clientmqueue -name '*' -delete
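The glob failed because the shell expands it into one giant argument list that exceeds the kernel’s ARG_MAX limit; find avoids the limit by unlinking each file itself. An alternative sketch is to stream the names through xargs, which batches them into multiple rm invocations:

find /var/spool/clientmqueue -type f -print0 | xargs -0 rm -f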

VPNs to Google Cloud Platform (GCP) when FortiGate is behind a NAT gateway

Ran into problems getting a VPN up and running between GCP and a FortiGate 60E that was behind a NAT gateway (with ports udp/500 and udp/4500 forwarded). On the GCP side, these messages would show up in the logs:

remote host is behind NAT
generating IKE_AUTH response 1 [ N(AUTH_FAILED) ]
looking for peer configs matching GCP.VPN.GATEWAY.IP[%any]...203.0.113.77[192.168.1.1]

This error means that GCP connected to the peer VPN gateway successfully, but in the IKEv2 identification payload, the peer identified itself by its private IP rather than the expected public one. AWS is not picky about this, but with GCP, the peer VPN gateway must identify itself using the same external IP address as the NAT device.

Most vendors have long supported an option to manually override the IP address for such scenarios. In Cisco IOS or IOS-XE, this can be controlled in the IKEv2 profile with the identity local address option:

crypto ikev2 profile GCP_IKEV2_PROFILE
 match address local interface GigabitEthernet1
 identity local address MY.PUBLIC.IP.ADDRESS
 authentication remote pre-share
 authentication local pre-share
 keyring local GCP_KEYRING
 lifetime 36000
 dpd 20 5 on-demand
!

With Palo Alto, this is configured in the IKE Gateway, Local Identification field:

For the sake of argument, we’ll say that CheckPoint uses the “Statically NATed IP” field to influence Local ID, although this doesn’t actually work.

FortiGate does offer a “Local ID” field in version 6.4.6 and higher, under the Phase 1 proposal:

Seems nice and straightforward, but even after changing this setting, the VPN tunnel still won’t establish. Logs on the GCP end change slightly and now show this:

looking for peer configs matching GCP.VPN.GATEWAY.IP[%any]...203.0.113.77[203.0.113.77]

The private IP is no longer showing, so it seems the issue should be solved. Instead, GCP reports a “Peer not responding” message. The FortiGate actually reports Phase 1 success, waits a few seconds, and then starts the negotiation all over. Not very helpful.

I configured a test VPN between the FortiGate and a Palo Alto, which then gave a very specific and extremely useful error message:

IKE phase-1 negotiation is failed. When pre-shared key is used, peer-ID must be type IP address. Received type FQDN

Now this explains the problem! Even though the FortiGate is sending the correct IP address in the IKEv2 identification payload, it’s being sent as the wrong identity type. Five identity types are listed in RFC 7815:

  • ID_IPV4_ADDR = 32-bit IPv4 address
  • ID_IPV6_ADDR = 128-bit IPv6 address
  • ID_FQDN = DNS hostname
  • ID_RFC822_ADDR = e-mail address
  • ID_KEY_ID = octet stream

If the FortiGate were smart, it would either default to the IPv4 address type or auto-detect the type based on the text entered into the field. But it seems to simply default to FQDN. Oddly, there is a CLI option called “localid-type” under the phase1-interface that is clearly intended to provide this functionality:

FGT60E1234567890 # config vpn ipsec phase1-interface

FGT60E1234567890 (phase1-interface) # edit gcp

FGT60E1234567890 (gcp) # set localid-type 
auto         Select ID type automatically.
fqdn         Use fully qualified domain name.
user-fqdn    Use user fully qualified domain name.
keyid        Use key-id string.
address      Use local IP address.

But, similar to CheckPoint, it just doesn’t work, and can be considered a broken feature.

Since GCP does not support FQDN authentication, VPNs between GCP and FortiGates behind a NAT are not possible at this time.

Rancid: no matching key exchange method found. Their offer: diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1

Time to move Rancid to a newer VM again; this time it’s Ubuntu 20. I hit a snag when I tried a test clogin run:

$ clogin myrouter
Unable to negotiate with 1.2.3.4 port 22: no matching key exchange method found.  Their offer: diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1

OpenSSH removed SHA-1 from the defaults a while back, which makes sense since the migration to SHA-2 began several years ago. So it looks like SSH is trying to use SHA-2 while the Cisco router is defaulting to SHA-1, and something has to give in order for negotiation to succeed.
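To see exactly which key exchange algorithms the local OpenSSH client supports, there’s a query flag; comparing its output against the router’s offer makes the mismatch obvious:

$ ssh -Q kex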

My first thought was to tell the Cisco router to use SHA-2, and this is possible for the MAC setting:

Router(config)#ip ssh server algorithm mac ?
  hmac-sha1      HMAC-SHA1 (digest length = key length = 160 bits)
  hmac-sha1-96   HMAC-SHA1-96 (digest length = 96 bits, key length = 160 bits)
  hmac-sha2-256  HMAC-SHA2-256 (digest length = 256 bits, key length = 256 bits)
  hmac-sha2-512  HMAC-SHA2-512 (digest length = 512 bits, key length = 512 bits)

Router(config)#ip ssh server algorithm mac hmac-sha2-256 hmac-sha2-512
Router(config)#do sh ip ssh | inc MAC       
MAC Algorithms:hmac-sha2-256,hmac-sha2-512

But not for key exchange, which apparently only supports SHA-1:

Router(config)#ip ssh server algorithm kex ?
  diffie-hellman-group-exchange-sha1  DH_GRPX_SHA1 diffie-hellman key exchange algorithm
  diffie-hellman-group14-sha1         DH_GRP14_SHA1 diffie-hellman key exchange algorithm

Thus, the only option is to change the setting on the client. SSH has CLI flags for cipher and MAC:

  • -c sets the cipher (encryption) list
  • -m sets the MAC (authentication) list

But there is no equivalent flag for key exchange; it can be set in the client’s /etc/ssh/ssh_config file with this line (or passed on the command line via -o):

KexAlgorithms +diffie-hellman-group14-sha1

I wanted to change the setting only for Rancid and not SSH in general, hoping that Cisco adds SHA-2 key exchange soon. It turns out it is possible to set SSH options in the .cloginrc file. The solution is this:

add  sshcmd  *  {ssh\  -o\ KexAlgorithms=+diffie-hellman-group14-sha1}

Clogin is now successful:

$ clogin myrouter
spawn ssh -oKexAlgorithms=+diffie-hellman-group14-sha1 -c aes128-ctr,aes128-cbc,3des-cbc -x -l myusername myrouter
Password:
Router#_
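As a side note, the same override could instead be scoped to just this one device in ~/.ssh/config, though the .cloginrc approach keeps the change contained to Rancid (‘myrouter’ being the host alias):

Host myrouter
    KexAlgorithms +diffie-hellman-group14-sha1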

By the way, I stayed away from diffie-hellman-group-exchange-sha1, as it’s considered insecure, whereas diffie-hellman-group14-sha1 is deprecated but still widely deployed and considered “strong enough”, probably thanks to its 2048-bit modulus.

Sidenote: this only affects Cisco IOS-XE devices. The Cisco ASA ships with this in the default configuration:

ssh key-exchange group dh-group14-sha256

Docker Cheat Sheet

Install Docker on Ubuntu

sudo apt update
 
sudo apt install apt-transport-https ca-certificates curl software-properties-common

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
apt-cache policy docker-ce
sudo apt install docker-ce

List images

docker image list

Run a container from an image

docker run --name <RUN_NAME> -p <HOST_PORT>:<CONTAINER_PORT> <IMAGE_NAME>:latest
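For example, to run the stock nginx image and publish container port 80 on host port 8080 (the names here are illustrative):

docker run --name web -p 8080:80 nginx:latest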

List running containers

docker ps

Build an image from a Dockerfile

docker build -t <IMAGE_NAME> <DIR_OF_Dockerfile>

Upload an image to a Docker registry

docker push <IMAGE_NAME>

Save (export) an image

docker save <IMAGE_NAME>:latest -o image.tar

Save (export) an image with real-time gzipping

docker save <IMAGE_NAME>:latest | gzip > image.tgz

Fix unsigned repo errors for Ubuntu and Debian images (often caused by Docker running low on disk space)

docker system prune

Delete all local images

docker rmi -f $(docker images -aq)

Getting Let’s Encrypt SSL Certificates on Linux and FreeBSD

First, install certbot. This is basically a Python script that reads the web server configuration and makes the request to the Let’s Encrypt API.

On Debian or Ubuntu:

sudo apt install certbot
sudo apt install python3-certbot-nginx
sudo apt install python3-certbot-apache

On FreeBSD:

sudo pkg install py37-certbot
sudo pkg install py37-certbot-nginx
sudo pkg install py37-certbot-apache
sudo pkg install py37-acme

Note that certbot can only match virtual hosts that listen on port 80.

Run this command for Nginx:

sudo certbot certonly --nginx

Or for Apache:

sudo certbot certonly --apache
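Let’s Encrypt certificates are valid for 90 days, so they need periodic renewal; certbot can do a trial run first:

sudo certbot renew --dry-run
sudo certbot renew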

Certificates are saved in /etc/letsencrypt/live on Linux, or /usr/local/etc/letsencrypt/live on FreeBSD.

In each sub-directory, there will be 4 files created:

  • privkey.pem = The private key
  • cert.pem = The SSL certificate
  • fullchain.pem = SSL cert + Intermediate Cert chain. This format is required by NGINX and some other web servers
  • chain.pem = Just the intermediate cert

Here’s a Python script that will create a list of all directories with Let’s Encrypt certs:

#!/usr/bin/env python3

import sys, os

# Let's Encrypt stores live certs in a different location on FreeBSD
if "linux" in sys.platform:
    src_dir = "/etc/letsencrypt/live"
elif "freebsd" in sys.platform:
    src_dir = "/usr/local/etc/letsencrypt/live"
else:
    sys.exit("Unsupported platform: " + sys.platform)

sites = [ f.name for f in os.scandir(src_dir) if f.is_dir() ]
for site in sites:
    if os.path.exists(os.path.join(src_dir, site, "cert.pem")):
        print("Letsencrypt certificate exists for site:", site)

Enabling Private Google Access in Google Cloud Platform

By default, calls to the various Google Cloud APIs resolve to a random Google-owned IP and require outbound internet access, whether via an external IP, Cloud NAT, or a 3rd-party network appliance.

If outbound internet access is not required by the application, or not desired for security reasons, enabling Private Google Access allows VM instances to connect to an internally routed prefix instead.

  1. On the subnet, turn on Private Google Access via the radio button (this can also be done from the CLI, as shown below)
  2. By default, all egress traffic is permitted. If egress traffic is being denied deliberately, create a rule allowing egress traffic to destination 199.36.153.8/30, TCP ports 80 and 443
  3. Create a private DNS zone called googleapis.com, and apply it to any networks that will use Private Google Access.
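For step 1, the equivalent gcloud command is along these lines (subnet and region names are placeholders):

gcloud compute networks subnets update my-subnet \
    --region=us-central1 \
    --enable-private-ip-google-access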

In the DNS zone, create two entries:

An A record called ‘private’ that resolves to the following 4 IP addresses:

  • 199.36.153.8
  • 199.36.153.9
  • 199.36.153.10
  • 199.36.153.11

A wildcard record ‘*’ that points to hostname ‘private’
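The zone and both records can also be created from the CLI; a sketch assuming a recent gcloud, with zone name ‘googleapis’ and network ‘my-vpc’ as placeholders:

gcloud dns managed-zones create googleapis --description="Private Google Access" \
    --dns-name=googleapis.com. --visibility=private --networks=my-vpc
gcloud dns record-sets create private.googleapis.com. --zone=googleapis \
    --type=A --ttl=300 --rrdatas=199.36.153.8,199.36.153.9,199.36.153.10,199.36.153.11
gcloud dns record-sets create "*.googleapis.com." --zone=googleapis \
    --type=CNAME --ttl=300 --rrdatas=private.googleapis.com.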

The zone will look like this once created.

Private Google Access should now be working. To test it, ping any googleapis.com hostname; it should resolve to one of those four IP addresses:

ping www.googleapis.com
PING private.googleapis.com (199.36.153.9) 56(84) bytes of data.

Getting web server variables and query parameters in different Python Frameworks

As I explored different ways of doing web programming in Python via different frameworks, I kept finding the need to examine HTTP server variables, specifically the server hostname, path, and query string. The method to do this varies quite a bit by framework.

Given the following URL: http://www.mydomain.com:8000/derp/test.py?name=Harry&occupation=Hit%20Man

I want to create the following variables with the following values:

  • server_host is 'www.mydomain.com'
  • server_port is 8000
  • path is '/derp/test.py'
  • query_params is this dictionary: {'name': 'Harry', 'occupation': 'Hit Man'}
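As a framework-free baseline, the standard library’s urllib.parse can produce all four values from the raw URL; the frameworks below differ mainly in where they surface these pieces:

#!/usr/bin/env python3

from urllib.parse import urlsplit, parse_qsl

url = "http://www.mydomain.com:8000/derp/test.py?name=Harry&occupation=Hit%20Man"
parts = urlsplit(url)

server_host = parts.hostname                 # 'www.mydomain.com'
server_port = parts.port                     # 8000
path = parts.path                            # '/derp/test.py'
query_params = dict(parse_qsl(parts.query))  # {'name': 'Harry', 'occupation': 'Hit Man'}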

Old School CGI

cgi.FieldStorage() is the easy way to do this, but it returns a FieldStorage object, which must be converted to a plain dictionary.

#!/usr/bin/env python3

if __name__ == "__main__":

    import os, cgi

    server_host = os.environ.get('HTTP_HOST', 'localhost')
    server_port = os.environ.get('SERVER_PORT', 80)
    path = os.environ.get('SCRIPT_URL', '/')
    query_params = {}
    fields = cgi.FieldStorage()
    for key in fields:
        query_params[key] = str(fields[key].value)

Note this will convert all values to strings. By default, cgi.FieldStorage() creates numeric values as int or float.

WSGI

Similar to CGI, but the environment variables are passed in a dictionary as the first parameter. There is no need to load the os module.

def application(environ, start_response):

    from urllib import parse

    server_host = environ.get('HTTP_HOST', 'localhost')
    server_port = environ.get('SERVER_PORT', 80)
    path = environ.get('SCRIPT_URL', '/')
    query_params = {}
    # REQUEST_URI is set by Apache; QUERY_STRING is the portable equivalent
    if '?' in environ.get('REQUEST_URI', '/'):
        query_params = dict(parse.parse_qsl(parse.urlsplit(environ['REQUEST_URI']).query))

    # A WSGI app must start the response and return an iterable of bytes
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [repr(query_params).encode()]

Since the CGI helpers aren’t used here, urllib.parse can be used to analyze the REQUEST_URI environment variable in order to form the dictionary.

Flask

Flask makes this very easy. The only real trick comes with path: the leading ‘/’ gets removed, so it must be re-added.

from flask import Flask, request

app = Flask(__name__)

# Route all possible paths here
@app.route("/", defaults={"path": ""})
@app.route("/<string:path>")
@app.route("/<path:path>")
def index(path):

    # Assumes the Host header includes a port, e.g. www.mydomain.com:8000
    [server_host, server_port] = request.host.split(':')
    path = "/" + path
    query_params = request.args.to_dict()

    return {"host": server_host, "port": server_port,
            "path": path, "query_params": query_params}

FastAPI

This one’s slightly different because the main variable to examine is actually a QueryParams object, which is a form of MultiDict.

from fastapi import FastAPI, Request

app = FastAPI()

# Route all possible paths here
@app.get("/")
@app.get("/{path:path}")
def root(req: Request, path: str = ""):

    # Assumes the Host header includes a port, e.g. www.mydomain.com:8000
    [server_host, server_port] = req.headers['host'].split(':')
    path = "/" + path
    query_params = dict(req.query_params)

    return {"host": server_host, "port": server_port,
            "path": path, "query_params": query_params}

AWS Lambda

Lambda presents a dictionary called ‘event’ to the handler (in the API Gateway proxy integration format), and it’s simply a matter of grabbing the right keys:

def lambda_handler(event, context):

    server_host = event['headers']['host']
    server_port = event['headers']['X-Forwarded-Port']
    path = event['path']
    query_params = event['queryStringParameters']

If multiValueHeaders are enabled, some of the variables come in as lists, which in turn may have lists as values, even if there’s only one item.

    server_host = event['multiValueHeaders']['host'][0]
    query_params = {}
    for key, values in event["multiValueQueryStringParameters"].items():
        query_params[key] = values[0]

Getting started with Flask and deploying Python apps to Google App Engine

Installing Flask on Linux or Mac

On Debian 10 or Ubuntu 20:

sudo pip3 install flask flask-cors

On Mac or FreeBSD:

sudo pip install flask flask-cors

Creating a basic Flask app:

#!/usr/bin/env python3

from flask import Flask, request, jsonify
from flask_cors import CORS

app = Flask(__name__)
CORS(app)  # flask-cors provides the Access-Control-Allow-Origin header seen in the curl test below

@app.route("/", defaults = {'path': ""})
@app.route("/<string:path>")
@app.route("/<path:path>")

def index(path):
    req_info = {
        'host': request.host.split(':')[0],
        'path': "/" + path,
        'query_string': request.args,
        'remote_addr': request.environ.get('HTTP_X_REAL_IP', request.remote_addr),
        'user_agent': request.user_agent.string
    }
    return jsonify(req_info)

if __name__ == '__main__':
    app.run()

Run the app

chmod u+x main.py
./main.py
Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)

Do a test curl against it

$ curl -v "http://localhost:5000/oh/snap?x=1&x=2"

< HTTP/1.0 200 OK
< Content-Type: application/json
< Content-Length: 65
< Access-Control-Allow-Origin: *
< Server: Werkzeug/1.0.1 Python/3.7.8
< Date: Wed, 21 Apr 2021 17:07:58 GMT
<
{"host":"localhost","path":"/oh/snap","query_string":{"x":"1"},"remote_addr":"127.0.0.1","user_agent":"curl/7.72.0"}

Deploying to Google Cloud App Engine

Create a requirements.txt file:

echo "flask" > requirements.txt

Create an app.yaml file:

printf "runtime: python38\nenv: standard\n" > app.yaml 

Now deploy the app to Google using the gcloud command:

gcloud app deploy
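Once the deploy completes, the running app can be opened in a browser with:

gcloud app browse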