Install Terraform on Debian 10 (Buster) when a proxy is required

# Set up the apt proxy, if required
sudo bash -c 'echo "Acquire::http::Proxy \"http://10.0.0.9:3128\";" > /etc/apt/apt.conf.d/99http-proxy'

# Set environment variables to be used by curl
export http_proxy=http://10.0.0.9:3128
export https_proxy=http://10.0.0.9:3128

Now install Terraform

curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -

sudo apt-get install software-properties-common

sudo apt-add-repository "deb [arch=$(dpkg --print-architecture)] https://apt.releases.hashicorp.com $(lsb_release -cs) main"

sudo apt update
sudo apt upgrade
sudo apt install terraform 

Migrating to MaxMind GeoIP2 for Python3

With Python 2 now EOL, one of my tasks was to replace old python2/geolite2 code with python3/geoip2. This does require a MaxMind subscription to either make the calls via web service or download static database files, which fortunately was an option.

Installing Python3 GeoIP2 package

On Ubuntu 20.04:

  • apt install python3-pip
  • pip3 install geoip2

On FreeBSD 11.4:

  • pkg install python3
  • pkg install py37-pip
  • pip install geoip2

Verify successful install

% python3
Python 3.7.8 (default, Aug  8 2020, 01:18:05) 
[Clang 8.0.0 (tags/RELEASE_800/final 356365)] on freebsd11
Type "help", "copyright", "credits" or "license" for more information.
>>> import geoip2.database
>>> help(geoip2.database.Reader) 
Help on class Reader in module geoip2.database:

Sample Python Script

#!/usr/bin/env python3

import sys
import geoip2.database
import geoip2.errors

ipv4_address = input("Enter an IPv4 address: ")

with geoip2.database.Reader('/var/db/GeoIP2-City.mmdb') as reader:
    try:
        response = reader.city(ipv4_address)
    except (geoip2.errors.AddressNotFoundError, ValueError):
        sys.exit("No info for address: " + ipv4_address)
    if response:
        lat = response.location.latitude
        lng = response.location.longitude
        city = response.city.name
        print("lat: {}, lng: {}, city: {}".format(lat, lng, city))

Ubuntu 20.04: “Unable to locate package python-pip”

I started the upgrade from Ubuntu 18.04 to 20.04 today and got a bit of a surprise when trying to install PIP so I could use the MaxMind geolite2 package:

root@ubuntu-rr58:/home/me# apt install python-pip
Reading package lists... Done
Building dependency tree       
Reading state information... Done
E: Unable to locate package python-pip

The root problem here is that Python 2 went EOL in January 2020 and does not ship with Ubuntu 20.04.  But there is a hack to load certain Python 2 packages…

First, install python3-pip:

apt install python3-pip

Then try to install the python2 packages you’re looking for:

pip3 install python-geoip
pip3 install python-geoip-geolite2

Now, install Python 2.7:

sudo apt install python2

In your script, use sys.path.insert to add the Python3 packages directory.

#!/usr/bin/env python2

from __future__ import print_function
import sys
sys.path.insert(1, '/usr/local/lib/python3.8/dist-packages/')
from geoip import geolite2


A better solution for this particular issue was to migrate from geoip-geolite2 to geoip2, which is fully Python 3.

Working with GCP Storage via Linux/FreeBSD CLI

Installing Google Cloud SDK on FreeBSD:

This is easily done via package or ports:

pkg install google-cloud-sdk

You may also wish to install the Python modules:

pkg install py39-google-api-python-client
pkg install py39-google-cloud-storage

Installing Google Cloud SDK on Debian/Ubuntu:

Follow Google's install instructions, which are summarized below.

Add the Google Cloud SDK as a package source:

echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list

Install required dependencies:

sudo apt install apt-transport-https ca-certificates gnupg

Add Google Cloud public key:

curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -

Install Google Cloud SDK:

sudo apt update
sudo apt install google-cloud-sdk

Prepping gCloud:

If a proxy server is required, set gcloud to use it:

gcloud config set proxy/type http
gcloud config set proxy/address 10.10.10.100
gcloud config set proxy/port 3128

Configure gCloud.  This will spit out a URL to paste into a browser, which will return an authorization code:

gcloud init

This will generate an encrypted file ~/.gsutil/credstor that will be used for authentication.  To re-authenticate:

gcloud auth login

To switch to a different project:

gcloud config set project <PROJECT_ID>

To switch to a different account:

gcloud config set account <ACCOUNT_EMAIL>

To use a service account:

gcloud auth activate-service-account <ACCOUNT_EMAIL> --key-file=<JSON_KEY_FILE>

CLI commands for working with Google Cloud Storage

List existing buckets

gsutil ls

Create a storage bucket called ‘mybucket’

gsutil mb gs://mybucket

Get information about a bucket called ‘mybucket’

gsutil ls -L -b gs://mybucket/

Upload a single file to the bucket

gsutil cp myfile gs://mybucket/

Upload a directory and its contents to a bucket

gsutil cp -r folder1 gs://mybucket/

List contents of a bucket

gsutil ls -r gs://mybucket/

Download a file called ‘testfile.png’ in ‘folder1’

gsutil cp gs://mybucket/folder1/testfile.png .

Delete multiple files in a folder

gsutil rm gs://mybucket/folder1/*.png

Delete a folder and all its contents

gsutil rm -r gs://mybucket/folder1

Delete a bucket, if bucket is empty

gsutil rb gs://mybucket

Delete a bucket and all its files

gsutil rm -r gs://mybucket

Accessing buckets via HTTPS

To upload a file directly via the JSON API, POST it with an OAuth2 bearer token:

curl -X POST --data-binary @[OBJECT_LOCATION] \
-H "Authorization: Bearer [OAUTH2_TOKEN]" \
-H "Content-Type: [OBJECT_CONTENT_TYPE]" \
"https://storage.googleapis.com/upload/storage/v1/b/[BUCKET_NAME]/o?uploadType=media&name=[OBJECT_NAME]"
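The endpoint in the curl call above can also be built programmatically. Here is a small Python sketch (names and values are placeholders, not Google client-library code) that assembles the JSON API media-upload URL:

```python
import urllib.parse

def media_upload_url(bucket: str, object_name: str) -> str:
    """Build the GCS JSON API media-upload URL used in the curl example."""
    base = "https://storage.googleapis.com/upload/storage/v1/b/%s/o" % (
        urllib.parse.quote(bucket, safe=""))
    # uploadType=media plus the URL-encoded object name
    query = urllib.parse.urlencode({"uploadType": "media", "name": object_name})
    return base + "?" + query

print(media_upload_url("mybucket", "folder1/testfile.png"))
```

Note that the object name is a single query value, so slashes in it get percent-encoded; GCS treats the "folders" as part of the object name anyway.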

To download files, buckets can be accessed at https://<bucket name>.storage.googleapis.com/path.  For example:

curl https://mybucket.storage.googleapis.com/folder1/testfile.png

Within GCP, for subnets that have Private Google Access enabled, this DNS name will always resolve to 199.36.153.8-11.
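Those four addresses (199.36.153.8-11) make up the 199.36.153.8/30 block, which is easy to check with the standard library (a quick sketch, not part of any Google tooling):

```python
import ipaddress

# The Private Google Access VIP range (199.36.153.8 through .11)
PRIVATE_GOOGLE_RANGE = ipaddress.ip_network("199.36.153.8/30")

def is_private_google_vip(ip: str) -> bool:
    """Return True if ip falls inside the 199.36.153.8/30 block."""
    return ipaddress.ip_address(ip) in PRIVATE_GOOGLE_RANGE

print(is_private_google_vip("199.36.153.10"))  # True
print(is_private_google_vip("8.8.8.8"))        # False
```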

Using AWS S3 Storage from Linux CLI

Start by installing aws-shell, then run the configure command to enter key and region information:

sudo apt install aws-shell
aws configure

To list files in a bucket called ‘mybucket’:

aws s3 ls s3://mybucket

To upload a single file:

aws s3 cp /tmp/myfile.txt s3://mybucket/

To upload all files in a directory with a certain extension:

aws s3 cp /tmp/ s3://mybucket/ --recursive --exclude '*' --include '*.txt'
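The --exclude '*' --include '*.txt' combination works because aws s3 evaluates filters in the order given and the last matching filter wins (everything is included by default). A Python sketch of that rule, my approximation rather than AWS code:

```python
import fnmatch

def selected(name, filters):
    """filters: list of ('include'|'exclude', pattern); last match wins."""
    keep = True  # default: everything is included
    for kind, pattern in filters:
        if fnmatch.fnmatch(name, pattern):
            keep = (kind == "include")
    return keep

filters = [("exclude", "*"), ("include", "*.txt")]
print(selected("notes.txt", filters))   # True
print(selected("image.png", filters))   # False
```

This is why the order matters: --include '*.txt' --exclude '*' would exclude everything.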

To recursively upload contents of a directory:

aws s3 cp /tmp/mydir/ s3://mybucket/ --recursive

To delete a single file:

aws s3 rm s3://mybucket/myfile.txt

To empty a bucket (delete all files, but keep bucket):

aws s3 rm s3://mybucket --recursive


Adding a swap file to a t2.nano in AWS running Ubuntu 18

I recently moved my Cacti/Rancid/Ansible Linux VM to a t2.nano in AWS. With only 500 MB of RAM, I knew there would be some performance limitations, but what I didn’t realize is that, by default, the instance had no swap configured.  A MariaDB server consumes ~200 MB of memory when running, and sure enough, mysqld was killed after a few days of uptime:

Apr 20 15:42:20 nettools kernel: [351649.590161] Out of memory: Kill process 27535 (mysqld) score 491 or sacrifice child
Apr 20 15:42:20 nettools kernel: [351649.598181] Killed process 27535 (mysqld) total-vm:1168184kB, anon-rss:240496kB, file-rss:0kB, shmem-rss:0kB
Apr 20 15:42:20 nettools kernel: [351649.676624] oom_reaper: reaped process 27535 (mysqld), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB

So I wanted to add a 1GB swap file so that any memory-heavy processes would be happy and stable.  It was easy enough to find a blog post that outlined creating the swapfile:

# Become root
sudo su

# Create an empty 1 GB (1MB x 1024) file called /swap.img
dd if=/dev/zero of=/swap.img bs=1M count=1024

# Set recommended permissions
chmod 600 /swap.img

# Convert it to usable swap
mkswap /swap.img
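The dd invocation sizes the file as bs × count, so 1 MiB blocks × 1024 blocks gives 1 GiB; a quick check of the arithmetic:

```python
# dd writes bs * count bytes: 1 MiB block size * 1024 blocks = 1 GiB
bs = 1024 * 1024      # bs=1M in bytes
count = 1024          # count=1024 blocks
size_bytes = bs * count
print(size_bytes)            # 1073741824
print(size_bytes // 2**30)   # 1 (GiB)
```

Adjust count to taste, e.g. count=2048 for a 2 GiB swap file.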

Many of these posts neglected to explain how to activate the swap automatically at boot time.  To do so, add this line to the bottom of /etc/fstab:

/swap.img swap swap defaults 0 0

The swap file can be activated immediately with this command:

swapon -a

Or, give it a test reboot and verify it’s being activated automatically at startup:

ubuntu@linux:~$ top
top - 16:55:04 up 18 min,  1 user,  load average: 0.00, 0.01, 0.03
Tasks: 108 total,   1 running,  71 sleeping,   0 stopped,   0 zombie
%Cpu(s): 10.3 us,  6.0 sy,  0.0 ni, 80.8 id,  3.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem :   491200 total,    21040 free,   329668 used,   140492 buff/cache
KiB Swap:  1048572 total,   992764 free,    55808 used.   140596 avail Mem 

  PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND                                                                                                    
  905 mysql     20   0 1166460 160616  10508 S  4.7 32.7   0:04.90 mysqld                                                                                                     
 1781 www-data  20   0  283268  29696  18944 S  1.7  6.0   0:00.05 php                                                                                                        
 1785 www-data  20   0  289512  30488  18688 S  1.7  6.2   0:00.05 php                                                                                                        
   35 root      20   0       0      0      0 S  1.0  0.0   0:00.45 kswapd0                                                                                                    
  967 www-data  20   0  481904  22936  18432 S  0.3  4.7   0:00.16 apache2                                                                                                    
    1 root      20   0  225264   8408   6792 S  0.0  1.7   0:02.56 systemd

Enabling SSL (HTTPS) on Apache 2.4 Ubuntu, with a good rating from SSL Labs as a bonus

Start by installing Apache 2.4.  This will run on port 80 out of the box:

sudo su
apt install apache2
apt install apache2-doc

To use SSL/TLS/HTTPS aka port 443 as well, follow these additional steps:

Activate the ssl and socache_shmcb modules:

cd /etc/apache2/mods-enabled/
ln -s ../mods-available/ssl.load .
ln -s ../mods-available/socache_shmcb.load .

Optionally, activate the headers, rewrite, and cgi modules, as they are often useful:

ln -s ../mods-available/headers.load .
ln -s ../mods-available/rewrite.load .
ln -s ../mods-available/cgi.load .

Copy the default ssl.conf file over and edit it:

cp /etc/apache2/mods-available/ssl.conf /etc/apache2/conf-enabled/
nano /etc/apache2/conf-enabled/ssl.conf

Near the bottom, modify these lines so that the AES-GCM cipher suites are preferred and only TLS 1.2 is supported:

   #SSLCipherSuite HIGH:!aNULL   
   SSLCipherSuite EECDH+AESGCM:DHE+AESGCM:ECDHE+AES+SHA:RSA+AES+SHA
   SSLHonorCipherOrder on
   SSLProtocol TLSv1.2
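The same TLS-1.2-only policy can be sanity-checked from the client side with Python's ssl module; this sketch just builds the context (connecting it to the server is left as an exercise):

```python
import ssl

# Build a client context restricted to TLS 1.2, mirroring "SSLProtocol TLSv1.2"
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.maximum_version = ssl.TLSVersion.TLSv1_2

print(ctx.minimum_version == ssl.TLSVersion.TLSv1_2)  # True
print(ctx.maximum_version == ssl.TLSVersion.TLSv1_2)  # True
```

Wrapping a socket with this context against the server should succeed; a context pinned to TLSv1 or TLSv1_1 should fail the handshake.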

Then edit /etc/apache2/sites-enabled/000-default.conf so it has default virtual hosts on both port 80 and port 443:

<VirtualHost _default_:80>
   ServerName localhost
   ServerAdmin webmaster@localhost
   DocumentRoot /var/www/html
   ErrorLog ${APACHE_LOG_DIR}/error.log
   CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
<VirtualHost _default_:443>
   ServerName localhost
   ServerAdmin webmaster@localhost
   DocumentRoot /var/www/html
   ErrorLog ${APACHE_LOG_DIR}/error.log
   CustomLog ${APACHE_LOG_DIR}/access.log combined
   SSLEngine On
   SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
   SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
</VirtualHost>

Restart Apache and you should now have service on both HTTP (port 80) and HTTPS (port 443):

apachectl configtest
Syntax OK
apachectl restart

Run the site through SSL Labs and the rating should be high, aside from the self-signed certificate warning.

Setting Linux clients to use a proxy server

Assuming the proxy server is 192.168.1.100, port 3128…

Most user-land applications, such as curl

These use the http_proxy and https_proxy environment variables.  To set these on BASH:

export http_proxy=http://192.168.1.100:3128
export https_proxy=http://192.168.1.100:3128
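Python's standard library honors the same variables; a quick way to confirm they're being picked up (this sketch sets them in-process rather than relying on the shell exports):

```python
import os
import urllib.request

# Same values as the shell exports above
os.environ["http_proxy"] = "http://192.168.1.100:3128"
os.environ["https_proxy"] = "http://192.168.1.100:3128"

# On Linux/BSD, urllib reads proxy settings from the environment
proxies = urllib.request.getproxies()
print(proxies["http"])   # http://192.168.1.100:3128
print(proxies["https"])  # http://192.168.1.100:3128
```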

For wget:

Edit /etc/wgetrc and uncomment these lines:

https_proxy = http://192.168.1.100:3128/
http_proxy = http://192.168.1.100:3128/


For Git:

git config --global http.proxy http://192.168.1.100:3128

or
printf "[http]\n\tproxy = http://192.168.1.100:3128\n" >> ~/.gitconfig


Package installations/updates in Debian & Ubuntu:

Create the file /etc/apt/apt.conf.d/99http-proxy with this line:

Acquire::http::Proxy "http://192.168.1.100:3128";

Package installations/updates in RHEL & CentOS

Add this line to /etc/yum.conf under the [main] section:

proxy=http://192.168.1.100:3128

PIP on the fly


sudo pip install --proxy=http://192.168.1.100:3128 somepackage

To install a squid proxy server:

Debian & Ubuntu

sudo apt-get install squid
/etc/init.d/squid stop
/etc/init.d/squid start

RHEL & CentOS

sudo yum install squid
systemctl stop squid.service
systemctl start squid.service

In both cases the configuration file is /etc/squid/squid.conf

I’d recommend setting these for better performance and improved stability:

# Allocate 2 GB of disk space to disk caching
cache_dir ufs /var/spool/squid 2048 16 256
# Cache files up to 256 MB in size, up from the default of 4 MB
maximum_object_size 256 MB
# Up max file descriptors from default of 1024 to 4096
max_filedescriptors 4096

Disabling IPv6 and DNSSEC in Bind9 / Ubuntu 16.04

We recently migrated an internal bastion host from Ubuntu 14 to 16.04.  I was able to pull secondary zones, but getting recursion working was a real problem.  The previous server would forward certain zones to other internal servers, and even though the configuration was the same, I was having zero luck:

root@linux:/etc/bind# host test.mydomain.com 127.0.0.1
Using domain server:
Name: 127.0.0.1
Address: 127.0.0.1#53
Aliases:

Host test.mydomain.com not found: 2(SERVFAIL)

I did a tcpdump and discovered the queries were being sent to the intended forwarder just fine, with valid IPs being returned:

19:11:24.180854 IP dns.cache-only.ip.46214 > dns.forwarder.ip.domain: 18136+% [1au] A? test.mydomain.com. (77)
19:11:24.181880 IP dns.forwarder.ip.domain > dns.cache-only.ip.46214: 18136 3/0/1 A 10.10.1.2, A 10.10.1.3 (125)

Grasping at straws, I theorized the two culprits could be IPv6 and DNSSEC.  Some Googling indicated it's a bit confusing how to actually disable these, but I did find the answer.

Disabling IPv6 and DNSSEC

There were two steps to do this:

In /etc/default/bind9, add -4 to the OPTIONS variable

OPTIONS="-u bind -4"

In /etc/bind/named.conf.options, make these changes:

// Disable DNSSEC 
//dnssec-validation auto
dnssec-enable no;

// Disable IPv6
//listen-on-v6 { any; };
filter-aaaa-on-v4 yes;

After restarting BIND with sudo /etc/init.d/bind9 restart, now we’re good:

root@linux:/etc/bind# host test.mydomain.com 127.0.0.1
Using domain server:
Name: 127.0.0.1
Address: 127.0.0.1#53
Aliases:

test.mydomain.com has address 10.10.1.2
test.mydomain.com has address 10.10.1.3