Using sftp-server to replace FTP

In my job, data collection and data deliveries are done either via API or FTP. FTP has its drawbacks but ultimately gets the job done. Some of those issues stem from the lack of improvement in the technology: the efficiency of the protocol, scaling, availability, file locking, active vs. passive connections fighting the firewall, and so on. None of these are impossible challenges, but they're unnecessary in today's cloud-oriented world.

We still can't get rid of the need to deliver exported data over a file transfer protocol, but we can change the protocol and the data handler and end up with a secure method that's much easier to deal with. While it's possible to run FTP over SSL/TLS for security, SFTP provides a much less intrusive replacement.

In a past post, I offered a method for creating a secondary service dedicated to SFTP, without allowing SSH console access. Later versions of OpenSSH don't support that method due to shared memory overlap, but if you really want an isolated, dedicated SFTP service, consider Docker instead. In this guide, I'm going to show how to configure your existing SSH daemon to serve both remote consoles and SFTP-only users.

Step one: SFTP vs SSH (Shell)

I'm going to refer to SSH as "shell access" and SFTP as just "sftp without shell access". I also use Debian, so you'll just have to adapt for your own distro.

My /etc/ssh/sshd_config looks like this:

Port 7387
Port 22

...

# applies to members of the 'sftponly' group who are
# connecting on port 7387
Match Group sftponly LocalPort 7387
      ChrootDirectory %h
      X11Forwarding no
      AllowTcpForwarding no
      ForceCommand internal-sftp -u 000

The external firewall port-forwards only port 7387.
I leave it up to you whether you want to use passwords or keys.
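
Once that's in place, a quick sanity check looks something like this (assuming systemd, and a user already in the sftponly group; hostname and username are placeholders):

~# systemctl reload ssh                      # Debian calls the service 'ssh'
$ sftp -P 7387 someuser@sftp.example.com     # should land you inside the chroot
$ ssh -p 7387 someuser@sftp.example.com      # no interactive shell – the forced internal-sftp kicks in instead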

Step two: add sftp as a shell

echo "/usr/lib/openssh/sftp-server" >> /etc/shells

Step three: Create the group and first user

Create the sftponly group; I like mine to be a system account, but it doesn't have to be.
groupadd --system sftponly

Create the user:
~# useradd --shell /usr/lib/openssh/sftp-server --user-group --create-home --home-dir /path/to/sftproot/userdir --groups sftponly username

Annoyingly, the ChrootDirectory directive requires that the chroot itself is owned by root and not writable by any other user or group.
~# chown root:root /path/to/sftproot/userdir
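
Because the chroot must be root-owned and not writable by the user, it's common to hand the user a writable subdirectory inside it – 'uploads' and 'username' below are placeholders:

~# mkdir /path/to/sftproot/userdir/uploads
~# chown username:username /path/to/sftproot/userdir/uploads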

Extending Security

I recommend you still use firewall ACLs to limit who can reach your SFTP server in the first place, but if you provide a service to anyone, anywhere, then consider some basic isolation practices to enhance the security further.

  1. Isolate the host serving the SFTP service.
  2. Run with read-only and restricted read/write capability (Docker is good for this).
  3. Disable password authentication and use key pairs protected with a passphrase (see the sshd_config sketch after this list).
  4. If you need to use passwords, make sure you enforce long, complex ones! Here’s a neat one-liner for password generation: openssl rand -base64 15
  5. Can you also support a two-factor method? Google Authenticator is pretty easy to integrate and there are more options if you look around. A nice cheap “two-factor” is to require both a passphrase-protected pubkey and the user’s password:
    AuthenticationMethods "publickey,password publickey,keyboard-interactive"
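
Put together, the relevant sshd_config lines might look something like this – a sketch only, pick whichever of the two variants fits:

# option 3: keys only, no passwords at all
PubkeyAuthentication yes
PasswordAuthentication no

# option 5: cheap "two-factor" – a passphrase-protected key AND the user's password
#PasswordAuthentication yes
#AuthenticationMethods publickey,password publickey,keyboard-interactive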

 

Apt repo using HTTPS

Following on from my post on how to create your own SSL Certificate Authority, I’ve also started doing this for custom apt repos where we allow public repos over http and private repos over https (+ basic-auth).

To do this, you effectively need 3(+1) things:

  1. The apt-transport-https package on the client.
  2. Your Root CA certificate installed on the client, so you can sign your own certificates without certificate errors – OR check out letsencrypt.org, OR buy a valid one from a proper CA and be done with it.
  3. HTTPS set up in the web server.
  4. We use basic-auth over https, so there's a fourth step: configure basic auth in /etc/apt/sources.list.d/custom.list

I won't cover the details of configuring Apache, creating an SSL root CA, or creating your own repo; I'll assume you already have that figured out.

So here are the condensed tasks.

  1. Take your root CA cert and key.
  2. Copy the cert to the destination server (the one that will connect to your repo). This usually lives in /usr/share/ca-certificates/somename/my-root-ca.crt
  3. On the client, update the CA list: dpkg-reconfigure ca-certificates
  4. On the client, install apt-transport-https.
    apt-get install apt-transport-https
  5. In an apt sources list file (I prefer to use one under /etc/apt/sources.list.d/), add the repo.
    deb https://your.reposerver.com/deb stable main
    or, with basic-auth:
    deb https://user:pass@your.reposerver.com/deb stable main

See it work with apt-get update

check_mk local check SSL certificate expiration

I was getting sick of tracking certificate expirations in Confluence and setting reminders in my calendar, so I thought: hey, why not make the monitoring system do this?

#!/usr/bin/perl
 
use strict;
use warnings;
use Net::SSL::ExpireDate;
use DateTime;
 
my $daysleft;
my $endDate;
my $dtnow  = DateTime->now;
my $status = { txt => 'OK', val => 0 };
 
# hosts to check on port 443
my @hosts;
 
push @hosts, 'www1.example.com';
 
foreach my $host (@hosts) {
        check_ssl_certificate($host);
}
 
sub check_ssl_certificate {
        my $host = shift;
        my $ed = Net::SSL::ExpireDate->new( https => "$host:443" );
        if ( defined $ed->expire_date ) {
                $endDate = $ed->expire_date;
                if ( $endDate >= DateTime->now ) {
                        $daysleft = $dtnow->delta_days($endDate)->delta_days;
                        # critical inside 45 days, warning inside 90 days
                        if ( $daysleft <= 45 ) {
                                 $status = { txt => 'CRITICAL', val => 2 };
                        } elsif ( $daysleft < 90 ) {
                                 $status = { txt => 'WARNING', val => 1 };
                        } else {
                                $status = { txt => 'OK', val => 0 };
                        }
                } else {
                        # certificate has already expired
                        $daysleft = 0;
                        $status = { txt => 'CRITICAL', val => 2 };
                }
                # check_mk local check format: <status> <item> <perfdata> <text>
                print "$status->{val} SSL_Certificate_$host Days=$daysleft; $status->{txt} - $host Expires on $endDate ($daysleft days)\n";
        }
}
 
exit(0);
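
To wire this into check_mk, drop the script into the agent's local-check directory on the monitored host (typically /usr/lib/check_mk_agent/local; the filename below is just a suggestion) and make it executable:

~# cp check_ssl_expiry.pl /usr/lib/check_mk_agent/local/
~# chmod +x /usr/lib/check_mk_agent/local/check_ssl_expiry.pl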

Self-Signed Wildcard with Trusted Root CA

I got fed up with certificate warnings when opening browsers on various devices to local servers running under my private domains, so I decided to fix the problem with my own root CA.

This is still pretty annoying to set up when I wipe a PC, but is way more practical long term.

So here’s how I did it 🙂

Create the root CA

  1. Create a private key
    $ openssl genrsa -out rootCA.key 2048
  2. Create the certificate (root CAs are self-signed certificates, btw)
    $ openssl req -x509 -new -nodes -key rootCA.key -days 3653 -out rootCA.pem

I'm not going to bother encrypting the private key (hence no passphrase and the -nodes parameter); it's for private use internally.

Create the wildcard certificate

Here’s the best part!

  1. Create a file named ${domain}.cnf with the following
    [req]
    req_extensions = v3_req
     
    [v3_req] 
    keyUsage = keyEncipherment, dataEncipherment
    extendedKeyUsage = serverAuth
    subjectAltName = @alt_names
     
    [alt_names]
    DNS.1 = ${domain}
    DNS.2 = *.${domain}
    DNS.3 = ${hostName}
    DNS.4 = ${otherHostName}
  2. Create a key for signing
    openssl genrsa -out ${domain}.key 2048
  3. Create a Certificate Signing Request
    openssl req -new -key ${domain}.key -out ${domain}.csr

    When presented with “Common Name”, enter

    *.${domain}

    eg: *.blog.geek.nz

  4. Sign the request against the root CA
    $ openssl x509 -req -days 3650 -in ${domain}.csr \
      -CA rootCA.pem -CAkey rootCA.key -CAcreateserial \
      -out ${domain}.crt -extfile ${domain}.cnf -extensions v3_req

    You’ll note the -CAcreateserial parameter; this only needs to be used once – the next time you create a certificate, change the

    -CAcreateserial

    to

    -CAserial rootCA.srl
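
Before trusting it everywhere, it's worth confirming the SANs actually made it into the signed certificate:

$ openssl x509 -in ${domain}.crt -noout -text | grep -A1 "Subject Alternative Name"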

Copy your rootCA.crt (the rootCA.pem from above, renamed so Windows recognises it) to a USB stick and plug it into your PCs.

In Windows, double-click the rootCA.crt and add it to the “Trusted Root Certification Authorities” store. Firefox uses its own store, so you’ll have to add it via Options->Advanced->Certificates->Authorities->Import

For Linux browsers – most use their own stores, so check the docs; the import option should be in a similar place to Firefox's.
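
For the system-wide store on Debian-based distros, the usual pattern is roughly this (the destination filename is up to you, but it must end in .crt):

~# cp rootCA.pem /usr/local/share/ca-certificates/my-root-ca.crt
~# update-ca-certificates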

For Mac, I dunno, google it.

EDIT: You could also just use letsencrypt.org, create the certs for Apache and then convert them to PFX for IIS/Azure.

check_mk_agent and ESXi 4.1

Decided to add our ESXi servers to our check_mk monitoring suite today and ran into two small issues.

1. check_mk_agent uses the bash interpreter – ESXi (4.1) uses ash
2. afaict, check_mk_agent relies on xinetd – ESXi uses inetd

Solution?

Download the check_mk_agent rpm package for ESX/Linux and extract the usr/bin/check_mk_agent script and usr/bin/waitmax.
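
If you don't want to actually install the rpm anywhere, you can pull the two files straight out of it (needs rpm2cpio installed; the package filename below is an assumption, adjust to whatever you downloaded):

$ rpm2cpio check-mk-agent-*.rpm | cpio -idmv ./usr/bin/check_mk_agent ./usr/bin/waitmax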

Edit the first line of check_mk_agent to be

#!/bin/sh

Save and exit, then scp both files to /usr/bin on the ESXi server.

scp ./{waitmax,check_mk_agent} root@esx-host:/usr/bin

Next, scp /etc/services and /etc/inetd.conf from the ESXi server and make the following changes.

./services
Add line:

check_mk 6556/tcp check_mk_agent   # check MK agent

./inetd.conf
Add line:

check_mk stream   tcp   nowait   root   /usr/bin/check_mk_agent check_mk_agent

Upload the files back to the ESXi host and you’re (almost) done!
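
A quick way to confirm the agent is answering – once inetd has been restarted and assuming nothing is firewalling port 6556 – is to poke it from the monitoring host:

$ nc esx-host 6556 | head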

This is the basic gist of how to get it working, but there’s a far easier way to do this across many hosts. I, for one, automated the process by creating a payload package and a deployment script, and by setting up SSH keys.

Of course, the next part is to actually make this stuff persistent.

rtorrent and those annoying SSL certificate errors

Got this today,

Tracker: [Peer certificate cannot be authenticated with known CA certificates]

A quick look at the site’s certificate showed it had expired, so I went into the source code and created the patch below.

It should also fix self-signed certificate errors.

--- rtorrent-0.8.9.org/src/core/curl_stack.cc   2012-09-15 15:58:54.000000000 +1200
+++ rtorrent-0.8.9.patched/src/core/curl_stack.cc       2012-09-15 15:46:54.000000000 +1200
@@ -52,7 +52,7 @@
   m_handle((void*)curl_multi_init()),
   m_active(0),
   m_maxActive(32),
-  m_ssl_verify_peer(true) {
+  m_ssl_verify_peer(false) {
 
   m_taskTimeout.set_slot(rak::mem_fn(this, &CurlStack::receive_timeout));
 
@@ -165,9 +165,10 @@
   if (!m_httpCaCert.empty())
     curl_easy_setopt(get->handle(), CURLOPT_CAINFO, m_httpCaCert.c_str());
 
-  if (!m_ssl_verify_peer)
+  if (!m_ssl_verify_peer) {
     curl_easy_setopt(get->handle(), CURLOPT_SSL_VERIFYPEER, 0);
-
+    curl_easy_setopt(get->handle(), CURLOPT_SSL_VERIFYHOST, 0);
+  }
   base_type::push_back(get);
 
   if (m_active >= m_maxActive)
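
To use it, save the diff (the filename below is arbitrary) in the top of the rtorrent source tree, apply it and rebuild:

$ patch -p1 < rtorrent-ssl-verify.patch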

Hot-adding memory to a Linux VM

Problem: When I hot-add memory to a Linux VM, it doesn’t show up when I type free.

Solution:
This has likely been bashed to death and can be readily found on the internet if you search for it, but here’s one more for completeness.

In its simplest form, you just need to refer to the kernel documentation, either in the kernel source (Documentation/memory-hotplug.txt) or online via YAGS (yet another Google search).

Here’s a script to “online” any offline memory after you have added it.

#!/bin/bash
 
# bring any hot-added (still offline) memory blocks online
 
if [ "$UID" -ne "0" ]; then
 echo -e "You must be root to run this script"
 exit 1
fi
 
for MEMORY in /sys/devices/system/memory/memory*
 do
  if grep -q online "${MEMORY}/state"; then
   echo -e "${MEMORY} is online"
  else
   echo -en "${MEMORY} is offline, bringing online ..."
   echo online > "${MEMORY}/state"
   echo "OK"
  fi
 done

Clear Linux Kernel Memory Caches

Every so often I find my desktop – well, Firefox, that is – a bit sluggish due to the amount of memory being consumed (upwards of 2GB!). The Linux kernel has very efficient memory management and will free cached memory when it needs to, but not always when you need it to.

sync; sysctl -w vm.drop_caches=3

This is effectively the same as calling

sync; echo 3 > /proc/sys/vm/drop_caches

Part of the problem I had was a leftover setting from some kernel memory testing I did a while ago, where I’d left this parameter set:

vm.swappiness=99

A swappiness this high makes the kernel very aggressive about swapping application memory out in favour of keeping the page cache. For the average person this can be left at the default value (60), omitted completely, or set to a lower value so the kernel prefers dropping cache over swapping your applications.
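
To check it, put it back to the default, and make that stick across reboots:

~# sysctl vm.swappiness                         # show the current value
~# sysctl -w vm.swappiness=60                   # back to the default
~# echo "vm.swappiness=60" >> /etc/sysctl.conf  # persist (or drop a file in /etc/sysctl.d/)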

Unpack Debian .deb package

I recently made the mistake of forgetting the upgrade process required to go from Debian Etch to Squeeze, which almost trashed a remote server!

Fortunately I only broke dpkg before anything else, so all I needed to do was unpack the old version over the top of the newer one. I thought others might like to know this one-liner too.

ar p dpkg_1.13.26_i386.deb data.tar.gz | tar xzv -C /
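
Newer packages compress the data member with xz instead of gzip (you can check with ar t on the .deb), in which case the equivalent is:

ar p dpkg_*.deb data.tar.xz | tar xJv -C /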

How to determine if your disk I/O sucks

If your I/O wait percentage is greater than (1/# of CPU cores) then your CPUs are waiting a significant amount of time for the disk subsystem to catch up.

Run the top command; if CPU I/O wait (wa) is, say, 13.9% and the server has 8 cores (1/8 = 0.125, i.e. 12.5%), then by the rule above this is bad. Disk access may be slowing the application down if I/O wait consistently sits around this threshold.
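
top only gives you the aggregate number; if the sysstat package is installed, iostat will show which device is actually struggling – keep an eye on the await and %util columns:

iostat -x 5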