PowerShell to download O365 IP ranges

 
$ipv4CsvFile = "${env:USERPROFILE}\Documents\O365_IPv4_Addresses.csv"
$ipv6CsvFile = "${env:USERPROFILE}\Documents\O365_IPv6_Addresses.csv"
 
[xml]$xml = ( New-Object System.Net.WebClient ).DownloadString( "https://support.content.office.net/en-us/static/O365IPAddresses.xml" )
 
$products = $xml.products.product
 
$ipList = $products.addresslist.Where( { ( $_.type -in ("IPv4","IPv6") ) -and ( $_.address -ne $null ) } )
 
$ipv4 = ($ipList.Where({ $_.Type -eq 'IPv4'})).address
$ipv6 = ($ipList.Where({ $_.Type -eq 'IPv6'})).address
 
$ipv4 | Out-File -FilePath $ipv4CsvFile
$ipv6 | Out-File -FilePath $ipv6CsvFile

megaport-pstools Released

https://bitbucket.org/cbrochere/megaport-pstools/src

megaport-pstools

PowerShell Tools for automation and scripting of Megaport services.

This started life with the purpose of figuring out how one might schedule a bandwidth change on a VXC, but then blew up into various other tools to simplify other tasks and requests from users, such as exporting and graphing bandwidth usage, or detecting interface/connection issues.

While the Megaport web UX at https://megaport.al is really great, simple and intuitive, it’s a pain having to click buttons over and over – and besides, it ain’t “DevOps-y” cool. There’s always a need for scripted automation that integrates with other PowerShell suites such as the Azure PowerShell Tools.

Why PowerShell? Meh, why not? Actually, I’ve just been spending a lot of time on Windows lately, running/writing Azure automation scripts, so it was pretty easy to write up a few test scenarios using the API and Invoke-RestMethod. By the time I had tested 4-5 API endpoints, I was already reusing the majority of the same code, so it pretty much escalated/optimised from there.

Btw – PowerShell works on Windows, Mac (untested) and Linux, and VS Code is pretty cool too 🙂

 

Using sftp-server to replace FTP

In my job, data collection and data deliveries are done either via API or FTP. FTP has its drawbacks, but ultimately gets the job done. Some of those issues come down to the lack of improvements in the technology: efficiency of the protocol, scaling, availability, file-locking, active vs. passive connections versus firewall security, and so on. Not all are impossible challenges, but they’re unnecessary in today’s cloud-oriented technology world.

We still can’t get rid of the ability to transfer exported data and deliver it over a file transfer protocol; however, we can change the protocol and the data handler and create a secure method that’s much easier to deal with. While it’s possible to run FTP with SSL/TLS for security, SFTP provides a much less intrusive replacement.

In a past post, I offered a method which allowed users to create a secondary service that was dedicated to SFTP and did not use SSH to connect to a console. Later versions of OpenSSH don’t support the same method due to shared memory overlap, but if you really want an isolated and dedicated SFTP service, then consider Docker instead. In this guide, I’m going to show how to secure your existing SSH for both remote console and SFTP-only access.

Step one: SFTP vs SSH (Shell)

I’m going to refer to SSH as “shell access” and SFTP as just “sftp without shell access”. I also use Debian, so you’ll just have to adapt for your own distro.

My /etc/ssh/sshd_config looks like this:

Port 7387
Port 22

...

# applies to members of the 'sftponly' group who are
# coming in on port 7387
Match Group sftponly LocalPort 7387
      ChrootDirectory %h
      X11Forwarding no
      AllowTcpForwarding no
      ForceCommand internal-sftp -u 000

External firewall port-forwards port 7387 only.
I leave it up to you if you want to use password or keys.
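After editing sshd_config, validate the config and reload sshd so the new Match block takes effect (a minimal sketch; the service is called ssh on Debian):

/usr/sbin/sshd -t && systemctl reload ssh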

Step two: add sftp as a shell

echo "/usr/lib/openssh/sftp-server" >> /etc/shells

Step three: Create the group and first user

Create the sftponly group; I like mine to be system accounts, but it does not have to be.
groupadd --system sftponly

Create the user:
~# useradd --shell /usr/lib/openssh/sftp-server --user-group --home-dir /path/to/sftproot/userdir --groups sftponly username

Annoyingly, the sshd ChrootDirectory directive requires that the SFTP root be owned by root.
~# chown root:root /path/to/sftproot/userdir
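Because the chroot has to stay root-owned, the usual trick is to give the user a writable subdirectory inside it. A minimal sketch – the directory, user and host names are placeholders – plus a quick connection test:

mkdir /path/to/sftproot/userdir/uploads
chown username:username /path/to/sftproot/userdir/uploads

# test from a client; 7387 is the port the external firewall forwards
sftp -P 7387 username@your.sftp.host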

Extending Security

I recommend that users still use firewall ACLs to limit who can access their SFTP server in the first place, but if you provide a service to anyone, anywhere, then consider basic isolation practices and enhance the security further.

  1. Isolate the host serving the SFTP service
  2. Run read-only and restricted r/w capability (Docker is good for this)
  3. Disable password authentication; use SSH key pairs with a passphrase on the private key.
  4. If you need to use passwords, make sure you enforce complex and long passwords! Here’s a neat one-liner for password generation: openssl rand -base64 15
  5. Can you also support a two-factor method? Google Authenticator is pretty easy to integrate with (see the sketch after this list), and there are more options available if you look around. A nice cheap two-factor is to require both an RSA public key (with a passphrase on the private key) and the user’s password.
    AuthenticationMethods "publickey,password publickey,keyboard-interactive"
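If you go the Google Authenticator route, a rough sketch of the Debian pieces involved is below – treat it as a starting point and verify the package and option names against your release; 'username' is a placeholder:

# install the PAM module and generate a per-user secret
apt-get install libpam-google-authenticator
su - username -s /bin/bash -c google-authenticator

# /etc/pam.d/sshd: add the line
#   auth required pam_google_authenticator.so

# /etc/ssh/sshd_config: enable challenge-response and require both factors
#   ChallengeResponseAuthentication yes
#   UsePAM yes
#   AuthenticationMethods publickey,keyboard-interactive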

 

VMware VMW_PSP_RR vSphere/ESX 6.x

So you want to change the ESX path selection policy on your datastores from MRU to RR, but the docs say you have to reboot ALL your hosts?

Recently I raised a request with my IaaS provider to change our path selection policy from the default MRU to RR. The tech/helpdesk person did some Google searching on the topic and got confused by the content – the VMware KB articles said to reboot and other articles were unclear – so they raised a support ticket with VMware and supposedly with HP to clarify. The response I got back from my IaaS provider was that both HP and VMware came back and said a reboot was required.

So I called bullshit on that; in fact, that’s the cheap/lazy answer.

The way it works is that you can create a policy mapping VMW_SATP_ALUA to VMW_PSP_RR, and it will automatically be applied to any new devices being added. Sure, it only affects new storage – existing storage won’t change without a) a host reboot or b) manually setting the paths on each LUN. A reboot is just the brute-force approach to save clicks.

I fully expected VMware Support to come back with the “reboot your computer” answer – I’ve got that same answer for many issues over the years since 3.5 (mostly that was about all you could do, tbh). I was a bit surprised by HP also stating this, given their own HP 3PAR + VMware 6 Best Practice Guide gives both options – reboot or manually set the paths …

This is how I have always done it since ESX 3.5 – roughly the flow sketched below.
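For reference, the esxcli version of that flow on 5.x/6.x looks roughly like this – the device ID is a placeholder, so verify the exact identifiers and syntax against your own build:

# make Round Robin the default PSP for the ALUA SATP (applies to newly claimed devices)
esxcli storage nmp satp set --satp=VMW_SATP_ALUA --default-psp=VMW_PSP_RR

# list devices and their current PSP, then flip an existing LUN without a reboot
esxcli storage nmp device list
esxcli storage nmp device set --device=naa.xxxxxxxxxxxxxxxx --psp=VMW_PSP_RR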

Now, if I had 10 or more ESX hosts, then yeah sure, let’s reboot to save clicks!

That’s not to say that rebooting isn’t a valid choice. If you make quite a few changes across a system, a reboot might be needed to weed out quirks. In my case, it was impractical for various reasons, and also unnecessary to perform vMotions across 100 or so VMs over a few days for what could be done in a few minutes.

Apt repo using HTTPS

Following on from my post on how to create your own SSL Certificate Authority, I’ve also started doing this for custom apt repos, where we allow public repos over http and private repos over https (+ basic-auth).

To do this, you effectively need 3(+1) things:

  1. apt-transport-https package on the client
  2. Install your Root CA Certificate, so you can sign your own certificates and remove certificate errors, OR check out letsencrypt.org, OR buy a valid one from a proper CA and be done with it.
  3. Set up https on the web server.
  4. We use basic-auth over https, so there’s a fourth step: configure basic auth in /etc/apt/sources.list.d/custom.list

I won’t cover the details of configuring Apache, creating an SSL Root CA or creating your own repo; I’ll assume you already have that figured out.

So here are the condensed tasks (with a copy-paste sketch after the list).

  1. Take your root CA cert and key
  2. Copy the cert to the client machine (the one connecting to your repo). This usually lives in /usr/share/ca-certificates/somename/my-root-ca.crt
  3. On the client, update the CA list: dpkg-reconfigure ca-certificates
  4. On the client, install apt-transport-https.
    apt-get install apt-transport-https
  5. In an apt sources list file (I prefer /etc/apt/sources.list.d/<name>.list), add the repo:
    deb https://your.reposerver.com/deb stable main
    or, with basic-auth:
    deb https://user:pass@your.reposerver.com/deb stable main
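Rolled together, the client-side steps look roughly like this (the cert, repo host and file names simply follow the examples above, so adjust to taste):

mkdir -p /usr/share/ca-certificates/somename
cp my-root-ca.crt /usr/share/ca-certificates/somename/
dpkg-reconfigure ca-certificates
apt-get install apt-transport-https
echo "deb https://user:pass@your.reposerver.com/deb stable main" > /etc/apt/sources.list.d/custom.list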

See it work with apt-get update

Soniq 40″ E40Z10A-NZ as a PC monitor

E40Z10A-NZ
So why use a TV as a PC monitor, you ask? I fall into the “because I can” category. I upgraded my lounge from a cheap Soniq to an LG 3D Smart TV, so I had a TV sitting around collecting dust.

Motivation to reuse the TV came while I was mid-way through a DIY office renovation. I figured I’d wall-mount my 27″ AOC and decided to get a longer wall mount for “future proofing” should I upgrade the screen. I used the Soniq’s size (which is 46″ in diagonal size, 40″ screen) as the template for positioning, given its larger size. Once it was on the wall, though, I couldn’t resist leaving it up to see how it looked at the end with all the bench-tops in place.

Once the TV wall-mount was secured into place, all cables were neatly aligned and the bench-top was back in position, I hung the screen on the wall, plugged it in and took a step back to bask in my achievement … I burst out laughing – it was a ridiculous sight to see at first, but I quickly started to geek-out over it.

Anyone who has ever plugged a TV into a PC will tell you there are two main issues.

  1. Overscan.
  2. Poor text quality.

If you want to use your TV as a PC monitor – whether it’s for an HTPC, a gaming rig or just because you can – then depending on the TV, you may not have an obvious way to disable overscan. In my case, my TV is a Soniq 40″ E40Z10A-NZ, which falls into the non-obvious category as there is no option in the TV menu.

Fortunately, after a little digging, I found the factory menu code for my Soniq LED TV.

Press “Source”, then enter 200912

Once in, I was able to adjust the overscan values to zero and retire GPU-based scaling. 🙂

Now if I can just get text to look a lot less shit …

Update: Switching to VGA solved this problem and also supports wake-up. With HDMI, on the other hand, the monitor goes to sleep but won’t wake up.

ESXi Guest e1000 tweaks

Windows

Refer to this KB article at VMware:
Poor network performance on Windows 2008 Server virtual machine

Linux

    Disable TCP Segmentation Offload
    ethtool -K eth0 tso off
    ethtool -K eth1 tso off
    Increase Descriptors
    ethtool -G eth0 rx 4096 tx 4096
    ethtool -G eth1 rx 4096 tx 4096

    You can either add these to rc.local (or your distro's equivalent) or, my preference, create a package (rpm/deb) so you can push/pull these to systems easily.

    You could also create /etc/modprobe.d/e1000.conf and add the below

    alias eth0 e1000
    options e1000 RxDescriptors=4096,4096 TxDescriptors=4096,4096

    and/or in /etc/network/interfaces:

    auto eth0
    iface eth0 inet dhcp
         up ethtool -G eth0 rx 4096 tx 4096
         up ethtool -K eth0 tso off

If you are still experiencing issues, you may just need to bite the bullet and use the vmxnet3 driver, which has eliminated packet loss/drops in the large majority of cases.
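Before going down that path, it can be worth confirming which driver the guest NIC is actually bound to; a quick check:

# shows the driver (e1000 vs vmxnet3) and version for the interface
ethtool -i eth0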

check_mk local check SSL certificate expiration

I was getting sick of tracking certificate expirations in Confluence and setting reminders in my calendars, so I thought: hey, why not make the monitoring system do this?

#!/usr/bin/perl
 
use strict;
use warnings;
use Net::SSL::ExpireDate;
use DateTime;
 
my $daysleft ;
my $endDate ;
my $dtnow = DateTime->now ;
my $status = { 'txt' => 'OK', 'val' => 0 };
 
my @hosts ;
 
push @hosts, 'www1.example.com';
 
foreach my $host (@hosts) {
        check_ssl_certificate($host);
}
 
sub check_ssl_certificate {
        my $host = shift;
        my $ed = Net::SSL::ExpireDate->new( https => "$host:443" ) ;
        if ( defined $ed->expire_date ) {
                $endDate = $ed->expire_date ;
                if ( $endDate >= DateTime->now ) {
                        $daysleft = $dtnow->delta_days($endDate)->delta_days ;
                        # check the tighter threshold first, otherwise CRITICAL is never reached
                        if ( $daysleft <= 45 ) {
                                $status = { 'txt' => 'CRITICAL', 'val' => 2 } ;
                        } elsif ( $daysleft < 90 ) {
                                $status = { 'txt' => 'WARNING', 'val' => 1 } ;
                        } else {
                                $status = { 'txt' => 'OK', 'val' => 0 } ;
                        }
                } else {
                        # certificate has already expired
                        $daysleft = 0 ;
                        $status = { 'txt' => 'CRITICAL', 'val' => 2 } ;
                }
                print "$status->{val} SSL_Certificate_$host Days=$daysleft; $status->{txt} - $host Expires on $endDate ($daysleft days)\n";
        }
}
 
exit(0);
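To deploy it as a check_mk local check, something like the below is enough – the script name is just an example, and Net::SSL::ExpireDate comes from CPAN if your distro doesn't package it:

# install the Perl dependency, then drop the script into the agent's local-check directory
cpan Net::SSL::ExpireDate
install -m 0755 check_ssl_expiry.pl /usr/lib/check_mk_agent/local/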

Self-Signed Wildcard with Trusted Root CA

I got fed up with certificate warnings when opening browsers on various devices to local servers running under my private domains, so I decided to fix the problem with my own root CA.

This is still pretty annoying to set up when I wipe a PC, but it is way more practical long term.

So here’s how I did it 🙂

Create the root CA

  1. Create a private key
    $ openssl genrsa -out rootCA.key 2048
  2. Create the certificate (root CAs are self-signed certificates, btw)
    $ openssl req -x509 -new -nodes -key rootCA.key -days 3653 -out rootCA.pem

I’m not going to bother encrypting the private key (refer: the -nodes parameter); it’s for private use internally.
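If you want to double-check what was just created, a quick look at the subject and validity period:

openssl x509 -in rootCA.pem -noout -subject -dates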

Create the wildcard certificate

Here’s the best part!

  1. Create a file named ${domain}.cnf with the following
    [req]
    req_extensions = v3_req
     
    [v3_req] 
    keyUsage = keyEncipherment, dataEncipherment
    extendedKeyUsage = serverAuth
    subjectAltName = @alt_names
     
    [alt_names]
    DNS.1 = ${domain}
    DNS.2 = *.${domain}
    DNS.3 = ${hostName}
    DNS.4 = ${otherHostName}
  2. Create a key for signing
    openssl genrsa -out ${domain}.key 2048
  3. Create a Certificate Signing Request
    openssl req -new -key ${domain}.key -out ${domain}.csr

    When presented with “Common Name”, enter

    *.${domain}

    eg: *.blog.geek.nz

  4. Sign the request against the root CA
    $ openssl x509 -req -days 3650 -in ${domain}.csr \
    -CA rootCA.pem -CAkey rootCA.key -CAcreateserial \
    -out ${domain}.crt -extfile ${domain}.cnf -extensions v3_req

    You’ll note the -CAcreateserial parameter; this only needs to be used once – the next time you create a certificate, change

    -CAcreateserial

    to

    -CAserial rootCA.srl

    You can sanity-check the resulting certificate as shown below.
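A quick sanity check that the chain is valid and the SANs actually made it into the signed certificate (same ${domain} placeholders as above):

openssl verify -CAfile rootCA.pem ${domain}.crt
openssl x509 -in ${domain}.crt -noout -text | grep -A1 "Subject Alternative Name"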

Copy your root CA certificate to a USB stick (rename rootCA.pem to rootCA.crt so Windows recognises it) and plug it into your PCs.

In Windows, double-click the rootCA.crt and add it to the “Trusted Root Certification Authorities” store. Firefox uses its own store, so you’ll have to add it via Options->Advanced->Certificates->Authorities->Import

For Linux browsers – most use their own stores, so check the docs; it should be in a similar place as in Firefox.

For Mac, I dunno, google it.

EDIT: You could also just use letsencrypt.org, create the certs for Apache and then convert to PFX for IIS/Azure.
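For that last conversion, the rough shape of the openssl command is below – file names are placeholders, so point them at wherever your key, cert and chain actually live:

openssl pkcs12 -export -out ${domain}.pfx -inkey ${domain}.key -in ${domain}.crt -certfile chain.pem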