Sorting images into Exif Date Taken folder

So tonight I decided I’d had enough of dealing with backups and storage space constraints on my home PC, so I figured Google Drive at USD$4.99 per month for 100GB is a bargain for a little less stress in my life. This of course left me wondering what to actually upload.

I picked my photos to start, but was immediately faced with a problem. Well two actually:

  1. A large number of images share similar names across various folders.
  2. Not all folders have photos sorted by date or by their actual Exif “Date taken” timestamp.
    Most of the photos are sorted into yyyy-mm-dd folders named for the day they were copied from the camera to the PC, and over the years some were just organized wherever.

Solution?

Script something to fix it!

The first thing to do was to rename the files uniquely, then move them into their respective yyyy-mm-dd folders based on each image’s Exif date-taken timestamp.

 
#!/usr/bin/env perl
use 5.010;
use strict;
use warnings;
use autodie;
 
use Path::Class;
use File::Copy;
use Digest::MD5 'md5_hex';
use Image::ExifTool 'ImageInfo';
 
sub rename_by_exif_DateTaken {
# and make the filenames unique
     for my $f ( dir()->children ) {
          next if $f->is_dir;
          my $exif = Image::ExifTool->new;
          $exif->ExtractInfo($f->stringify);
          my $date = $exif->GetValue('DateTimeOriginal','PrintConv');
 
          next unless defined $date;
          $date =~ tr[ :][T-];
 
          my $digest = md5_hex($f->slurp);
          $digest = substr($digest,0,7);
          my $new_name = "$date-$digest.jpg";
 
          unless ( $f->basename eq $new_name ) {
               rename $f => $new_name;
          }
     }
}
 
rename_by_exif_DateTaken;
 
sub sort_into_date_folders {
# yyyy-mm-dd
     for my $f ( dir()->children ) {
          next if $f->is_dir;

          my $exif = Image::ExifTool->new;
          $exif->ExtractInfo($f->stringify);

          my $timestamp = $exif->GetValue('DateTimeOriginal', 'PrintConv');
          next unless defined $timestamp;
          $timestamp =~ tr[ :][T-];
          my ($date, undef) = split /T/, $timestamp;

          print "$date\n";

          mkdir($date) unless -d $date;

          print "moving $f -> $date/$f\n";
          move("$f", "$date/$f");
     }
}
 
sort_into_date_folders;
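For the curious, the `tr[ :][T-]` line is doing the heavy lifting in both subs; the same transform in shell shows what an Exif `DateTimeOriginal` value becomes:

```shell
# Exif's "YYYY:MM:DD HH:MM:SS" becomes a filesystem-safe, sortable name:
# colons become dashes and the date/time separator space becomes 'T'.
echo '2012:09:15 15:58:54' | tr ' :' 'T-'
# -> 2012-09-15T15-58-54
```

Splitting that on `T` then gives back the plain yyyy-mm-dd date for the folder name.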

kudos to David Golden for the original inspiration.

check_mk_agent and ESXi 4.1

Decided to add our ESX servers to our check_mk monitoring suite today; ran into two small issues.

1. check_mk_agent uses the bash interpreter – ESXi (4.1) uses ash
2. afaict, check_mk_agent relies on xinetd – ESXi uses inetd

Solution?

Download the check_mk agent RPM for ESX/Linux and extract the usr/bin/check_mk_agent script and the usr/bin/waitmax binary.

Edit the first line of check_mk_agent to be

#!/bin/sh

Save and exit, then scp both files to /usr/bin on the ESXi server.

scp ./{waitmax,check_mk_agent} root@esx-host:/usr/bin

Next, scp /etc/services and /etc/inetd.conf from the ESXi server and make the following changes.

./services
Add line:

check_mk 6556/tcp check_mk_agent   # check MK agent

./inetd.conf
Add line:

check_mk stream   tcp   nowait   root   /usr/bin/check_mk_agent check_mk_agent

Upload the files back to the ESXi host and you’re (almost) done!

This is the basic gist of how to get it working, but there’s a far easier way to do this across many hosts. I for one automated the process by creating a payload package and a deployment script, and setting up ssh keys.
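That automation might look something like the sketch below. The host name, file list and DRY_RUN switch are all illustrative, not the actual script I used; it assumes ssh keys are already in place.

```shell
#!/bin/sh
# Hypothetical per-host deployment sketch for the check_mk agent files.
# DRY_RUN=1 (the default here) prints the commands instead of running them.
HOST=${HOST:-esx-host}
DRY_RUN=${DRY_RUN:-1}

run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

run scp waitmax check_mk_agent "root@$HOST:/usr/bin/"
run scp services inetd.conf "root@$HOST:/etc/"
```

Loop it over a host list and the whole fleet is done in one pass.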

Of course, the next part is to actually make this stuff persistent.

rtorrent and those annoying SSL certificate errors

Got this today,

Tracker: [Peer certificate cannot be authenticated with known CA certificates]

A quick look at the site’s certificate showed it had expired, so I went into the source code and created the patch below.

It should also fix self-signed certificate errors.

--- rtorrent-0.8.9.org/src/core/curl_stack.cc   2012-09-15 15:58:54.000000000 +1200
+++ rtorrent-0.8.9.patched/src/core/curl_stack.cc       2012-09-15 15:46:54.000000000 +1200
@@ -52,7 +52,7 @@
   m_handle((void*)curl_multi_init()),
   m_active(0),
   m_maxActive(32),
-  m_ssl_verify_peer(true) {
+  m_ssl_verify_peer(false) {
 
   m_taskTimeout.set_slot(rak::mem_fn(this, &CurlStack::receive_timeout));
 
@@ -165,9 +165,10 @@
   if (!m_httpCaCert.empty())
     curl_easy_setopt(get->handle(), CURLOPT_CAINFO, m_httpCaCert.c_str());
 
-  if (!m_ssl_verify_peer)
+  if (!m_ssl_verify_peer) {
     curl_easy_setopt(get->handle(), CURLOPT_SSL_VERIFYPEER, 0);
-
+    curl_easy_setopt(get->handle(), CURLOPT_SSL_VERIFYHOST, 0);
+  }
   base_type::push_back(get);
 
   if (m_active >= m_maxActive)

Hot-adding memory to a Linux VM

Problem: when I hot-add memory to a Linux VM, it doesn’t show up when I type free.

Solution:
This has likely been bashed to death and can be readily found on the internet if you search for it, but here’s one more for completeness.

In its simplest form, you just need to refer to the kernel documentation, either in the kernel source (Documentation/memory-hotplug.txt) or online via YAGS (yet another Google search).

Here’s a script to “online” any offline memory after you have added it.

#!/bin/bash
 
if [ "$UID" -ne "0" ]; then
 echo -e "You must be root to run this script"
 exit 1
fi
 
for MEMORY in /sys/devices/system/memory/memory*
 do
  if grep -q online "${MEMORY}/state"; then
   echo -e "${MEMORY} is online"
  else
   echo -en "${MEMORY} is offline, bringing online ..."
   echo online > "${MEMORY}/state"
   echo "OK"
  fi
 done

100Mbit Telstraclear – A giant leap forward for NZ residential Internet

I admit that when I saw the press release I was a little excited, and when I saw that the price is about where most ~25Mbit connections are now, I was even more ecstatic! $155 gets you 100Mbit cable with 150GB per month, and it even comes with an upgraded Cisco modem with a Gigabit Ethernet LAN connection. OK, granted, it’s a bit pointless having a 1Gbit LAN connection when you’re capped at 100Mbit, but it makes you wonder whether this device was selected to mitigate mass upgrade costs when other speeds become available. Perhaps 200Mbit? 500Mbit? Heaven forbid they upgrade to 1Gbit?!

As always, the data caps are still ridiculous. It’s day two of the plan change and upgrade and I’m already 20% through my 150GB allowance. And that’s just from testing and [yet again, another sale on] Steam.

My initial testing shows some sick speeds, anywhere from 6~8MB/s on average up to 9.6~9.9MB/s depending on what you’re downloading and from where. Speed tests are a mixed bag: most show 70~80Mbit download and 7~8Mbit upload, whereas TelstraClear’s own speed test returns 99.xMbit/s download and 9.xMbit/s upload. But as we know, these tests are generally “tweaked” for the best result under ideal conditions and don’t always reflect reality.
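For context on those numbers, the raw arithmetic (ignoring TCP/IP and DOCSIS overhead) puts the theoretical ceiling just above what I’m seeing:

```shell
# A 100 Mbit/s link at 8 bits per byte gives the raw ceiling in MB/s;
# real-world throughput lands a little lower once protocol overhead is paid.
awk 'BEGIN { printf "%.1f MB/s ceiling at 100 Mbit/s\n", 100 / 8 }'
# -> 12.5 MB/s ceiling at 100 Mbit/s
```

So 9.6~9.9MB/s sustained is actually a pretty healthy fraction of the link.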

Time will tell how it goes as service uptake increases. I’m fortunate that not many in my neighbourhood or suburb will see the value given the price, but for me it’s been a long time coming since the days of JetStart to get to here.

[Screenshot: ruTorrent transfer speeds on the 100Mbit connection]

Clear Linux Kernel Memory Caches

Every so often I find my desktop, well, Firefox that is, a bit sluggish due to the amount of memory being consumed (upwards of 2GB!). The Linux kernel has very efficient memory management that occasionally frees any cached memory on the system. However, it doesn’t always do so when you need it.

sync; sysctl -w vm.drop_caches=3

This is effectively the same as calling

sync; echo 3 > /proc/sys/vm/drop_caches
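To see how much is actually sitting in cache before (or after) dropping it, /proc/meminfo is enough, and needs no root:

```shell
# Buffers + Cached is roughly what drop_caches=3 can reclaim
# (minus whatever is dirty or still actively in use).
grep -E '^(MemFree|Buffers|Cached):' /proc/meminfo
```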

Part of the problem I had was a leftover setting from some kernel memory testing I’d done some time ago, where I’d left this parameter set:

vm.swappiness=99

A value this high makes the kernel very eager to swap process memory out in favour of cache. For the average person this can be left at the default value (60) or omitted completely, or set to a lower value to cache less aggressively, though not so low as to ruin your experience.

Unpack Debian .deb package

I recently made the mistake of forgetting the upgrade process required from Debian Etch to Squeeze, which almost trashed a remote server!

Fortunately I only broke dpkg before anything else, so it only required me to unpack the old version over top of the newer one. I thought others might like to know this one-liner too.

ar p dpkg_1.13.26_i386.deb data.tar.gz | tar xzv -C /
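If the system isn’t actually broken, it’s safer to unpack into a staging directory first and copy only what you need. A self-contained sketch of the same technique using a toy archive (a real .deb also carries debian-binary and control.tar.gz members):

```shell
#!/bin/sh
set -e
work=$(mktemp -d)
cd "$work"

# Build a toy ar archive with the same layout as a .deb's data member.
mkdir -p pkg/usr/bin
echo 'hello' > pkg/usr/bin/demo
tar czf data.tar.gz -C pkg .
ar rc demo.deb data.tar.gz

# Extract the data member into a staging area instead of straight over /.
mkdir -p staging
ar p demo.deb data.tar.gz | tar xz -C staging
ls staging/usr/bin
# -> demo
```

Only once the staged tree looks right would you repeat the extraction with `-C /`.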

OCZ Synapse Cache

So I bought the OCZ Synapse Cache 128GB for NZ$299. I had hoped this would help speed up my raid array and grant me maybe another year before I needed to replace my ageing motherboard, CPU and memory. I weighed up the drive’s cost-to-performance against the cost of replacing the motherboard, CPU and RAM, and decided the SSD’s performance would be adequate for now.

The spec of my h/w at the time was:

ASUS A8N SLi-SE Motherboard
AMD Athlon64 X2 4400+
Palit GeForce GTX 580 (1.5GB)
4GB 400MHz  DDR RAM
4x WD Raptor 150LFS 10K RPM SATA 2 drives (RAID 0+1)
TT 750W PSU

The marketing and reviews were raving about the Synapse, so I bit the bullet and ordered myself one. When it arrived the next day I was pretty eager to get it in my PC and stress test it with some serious gaming, but it was not meant to be.

Installing the DataPlex software revealed the first problem: AHCI wasn’t enabled. So I attempted to remedy this by first configuring Windows to support AHCI, then rebooted into the BIOS to find my second problem. The board didn’t support AHCI!

I ended up buying new kit in the end …
Gigabyte 990FXA-UD3  Motherboard
AMD FX8150 Black Edition CPU
Palit GeForce GTX 580 (1.5GB)
2x 4GB Kingston KVR1333MHz CL9 DDR3 RAM
1x 128GB Synapse cache SSD
4x WD Raptor 150LFS 10K RPM SATA 2 drives (RAID 0+1)
TT 750W PSU

I’m a bit of a gamer, so I wanted something that can keep up with most games on regular settings without going overboard. I’m not a purist or a graphics aficionado and prefer a balance of smooth FPS with fair visual effects (although I do hate jaggies!).

After 2-3 days of modding, rebuilding, then rebuilding again because I found better ways to route cables and modified my case to support those “better ways”, I eventually got to try the Synapse.

My first attempt was absolutely crap. Enabling AHCI support only allows single drives to be used, which defeated the point of speeding up my raid array!

I gave it a go anyway, just to see if it would dazzle me with its speed, so I installed Win 7 and then DataPlex. Given that it was a new install, I opted to recover some data from my Acronis images rather than download it (My Docs, Steam/Origin etc). I left this running overnight, and that’s when I discovered my second issue with the OCZ Synapse.

My PC had entered sleep mode. Normally this wouldn’t be a problem; however, the BIOS defaults disabled any means of waking the PC from this state, as it turned off the USB ports!

So I had no choice but to cold-boot the system. Entering the BIOS, I managed to tweak the settings so that it wouldn’t happen again, and while I was there I updated the BIOS to the latest version for good measure.

Once I exited and rebooted, the DataPlex software detected the power loss and presented me with the option to recover or disable. I opted to recover, which in a matter of seconds managed to delete my partition table!

You can imagine my frustration. I grabbed my Gentoo live CD and attempted a repair; however, it appeared even the data on the disk was unrecoverable.

After a break for an hour or so, I gave up on the Synapse for about a week before I tried again. This time I kept my raid array and installed the Synapse SSD via the two extra SATA ports provided by the Marvell SATA controller. This in itself was a nuisance. There were no on-board SATA ports for this controller, only the eSATA ports, which meant either providing an external power supply to the drive, or installing the drive internally, connecting it to the PSU and running an external eSATA-to-SATA cable back into the chassis. I opted for the latter as it was the easiest (if not the ugliest).

It seemed to work fine for a few days; things seemed quite snappy and load times were good. I wouldn’t say it was anything to rave about though, and that power-loss issue really bugged me.

After about three days, I decided I’d pull the power out and see what happened. Sure enough, the disable/recover option was displayed. This time I selected disable, which then required uninstalling the DataPlex software. It wouldn’t uninstall, and booting into safe mode wouldn’t uninstall it either, as the uninstaller needs the MSI service, which won’t run in safe mode.

I think in the end I managed to uninstall it; how exactly escapes me, as it was weeks ago now. In any case, I reinstalled the software and tried the test again, this time using the recovery option.

This option takes FOREVER. I think I left it overnight and it was ready the next day. I have to say, though, the whole process I went through just wasn’t worth the effort, and now the Synapse is used as a regular 64GB SSD and I’m happy with that.

I won’t be going down that path anytime soon, maybe I’ll try the Intel equivalent, but that won’t happen anytime soon.

I wouldn’t recommend the Synapse to people I liked, but I would recommend SSDs in general.

How to determine if your disk I/O sucks

If your I/O wait percentage is greater than 1/(number of CPU cores), then your CPUs are spending a significant amount of time waiting for the disk subsystem to catch up.

Run the top command: if CPU I/O wait (wa) is, say, 13.9% and the server has 8 cores (1/8 = 0.125, i.e. 12.5%), then by the rule above this is bad. Disk access may be slowing the application down if I/O wait is consistently around this threshold.
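The rule of thumb is easy to script; a small sketch that works out the threshold for the current box (100/cores is just 1/#cores expressed as a percentage):

```shell
#!/bin/sh
# Compute the iowait warning threshold for this machine's core count,
# then compare it against the %wa figure top reports.
cores=$(getconf _NPROCESSORS_ONLN)
threshold=$(awk -v c="$cores" 'BEGIN { printf "%.1f", 100 / c }')
echo "${cores} cores: start worrying when iowait sits above ${threshold}%"
```

On an 8-core box that prints a 12.5% threshold, matching the worked example above.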