Home Automation Overview

Ever since I was little I have had (on top of my love for computers in general) a fascination for home automation. In The Netherlands we had a ‘House of the Future’, and my dad got me the accompanying book, which I loved to read and scan through over and over again.

Fast forward to today and it seems the future is finally here. Sure, things were possible before, but only if you were willing to pay big bucks for a rather closed system, or to go all DIY (and still pay quite a few bucks). Now, especially if you’re not afraid of a bit of tinkering and perhaps a bit of DIY, a lot is possible and reasonably affordable. Moreover, a lot of things are interoperable or can be opened up, which is especially fun if you have a programming background.

Everything seems to come together: many companies are developing devices, stores are putting them on the shelves, ads are on TV; WiFi, broadband and 3G are (in the western world) omnipresent, small computers are cheaply available, and everyone always has a powerful internet-connected computer with built-in touch screen at hand.

From a bird’s-eye view, what are some of the currently available options and exciting features?

433 MHz
In The Netherlands, a company called ‘Klik-aan-klik-uit’ came up with a line of cheap actuators and sensors that operate together wirelessly in the 433 MHz band. It provides a nice entry into the world of home automation, but the low price comes with some downsides: communication is ‘send-and-forget’, sensors only send and actuators only receive, so a missed radio message can result in inconsistent situations. Also, there is no security at all.

Z-Wave
Z-Wave is a wireless protocol operating on 868.42 MHz (Europe) or 908.42 MHz (USA). Although it’s a proprietary standard licensed by Sigma Designs, other companies are welcome to build their devices around it. This has resulted in a broad range of sensors and actuators from different brands, all operating together. Z-Wave is bi-directional and uses a mesh protocol in which data can hop a few times before it reaches its destination, enabling a network to span well beyond the direct 1-to-1 radio range. Z-Wave requires a central controller, of which there are plenty. Z-Wave.me offers Raspberry Pi and USB controller modules, which DIY people can use to interact with the Z-Wave network.

Zigbee
Zigbee is, in contrast to Z-Wave, an open standard. It runs on the 2.4 GHz frequency (same as WiFi and Bluetooth) and also uses mesh technology. Just like Z-Wave, it requires a central unit, called a ‘coordinator’. At first glance, at least in The Netherlands, there are fewer devices available for Zigbee, and interoperability is not always straightforward (see Hue below). Interacting with Zigbee devices can be done using a custom coordinator, such as the XBee shield for Arduino.

WiFi
WiFi requires more power than Zigbee and Z-Wave and offers higher data rates, but no mesh technology (so all devices must be in range of a router, although star designs can of course be made using standard ethernet). Devices can use a lot of readily available protocols, but also have to deal with different standards, network architectures and firewalls in order to operate together. A ‘dial out’ solution can be used, where devices find each other at a central server (of the manufacturer) on the internet. This of course raises privacy and security issues.

Because of all this, WiFi is less practical for small sensors and actuators that need to be placed in great numbers (i.e. per lamp, per door, per room), and more suited to devices such as the Nest (thermostat). Interacting with these devices will be different at the software level for every manufacturer (Nest, for instance, has an API available).

X10
X10 is a protocol that is a bit older than the others, and it originally ran over the power lines (on the same lines as the power itself), although it is also used on wireless channels. Because of its age it is a bit slow and limited, especially compared to Z-Wave and Zigbee. It has a more ‘industrial’ user base, and I have not really seen consumer products with this protocol like I have for the others.

Hue
Turning lights on and off over AC current (230V or 110V) is quite straightforward both mechanically (wall switch) and automatically (relay). Dimming them without side effects, humming noises or other artefacts is a whole different ballgame. This holds true even for LEDs, as long as the dimming is applied on the AC line. It’s much better to use DC lamps and dim ‘after’ the AC-DC conversion. Since most houses run AC over the wires up to the lamp, this poses a problem (because it only allows dimming over AC, which is more difficult).

The Philips Hue lamp solves this by combining LED lighting, AC-DC conversion, dimming and wireless control all inside a single bulb that is powered from the E27 socket. Hue uses the Zigbee protocol, but a variation of it, which makes interoperating with it at the Zigbee level impossible or impractical (depending on hardware / software versions and who you ask). Instead, one can use a Hue bridge and connect to that over the local ethernet network. It has a REST API available for that, including various SDKs and extensive documentation.
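
To give an idea of that API (a sketch, assuming Python with the ‘requests’ package; the bridge address, API username and lamp number are placeholders you obtain when registering with your own bridge), switching and dimming a lamp boils down to a single HTTP PUT:

    import requests

    # Hypothetical bridge address and API username (obtained via the
    # bridge's registration flow); lamp number 1 is just an example.
    bridge = "http://192.168.1.2/api/myusername"

    # Turn lamp 1 on and dim it to roughly half brightness (bri ranges 0-254)
    response = requests.put(bridge + "/lights/1/state",
                            json={"on": True, "bri": 127})
    print(response.json())  # the bridge reports success/failure per attribute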

Hue also has switches and PIR sensors available, but sadly there is still no option in the API to react to their events (except for continuously polling their state, but I don’t think the bridge can handle polling fast enough to monitor multiple switches in a way that makes swift responses possible).

Control center
So now that you have these devices in your house, how do you interconnect them? Well, some solutions provide their own (simple) ways to ‘wire’ your wireless devices together. The Dutch 433 MHz solution ‘Klik-aan-klik-uit’, for instance, has a way to pair actuators and sensors directly, and the Philips Hue system has an app that both controls and programs the system through the bridge.

One step more ‘in control’ would be to buy a device that acts as the ‘heart’ of your system. Inside will be a computer running some pre-installed software, probably with a web interface through which you can configure your system and make rules for it. Some of these central units only work with a single protocol, such as the BeNext with Z-Wave; others can connect to devices with different protocols.

One step further is to ‘build’ the device from the previous step yourself (well, combine a few hardware parts, such as a Raspberry Pi and a Z-Wave module) and run some open source software (such as Domoticz) on it, to basically end up with the off-the-shelf device described in the previous paragraph. Different software modules can also be added, for instance to integrate the Hue lamps.

I myself am going yet one step further, because I’d rather program than tinker with existing software that more often than not misses exactly that feature I would like to see. Therefore, I’ve set up a Raspberry Pi with an MQTT broker. MQTT is a lightweight messaging protocol in which clients publish messages to topics and subscribe to them. All I need is to make bridges between every different platform and MQTT, plus some software that does some reasoning based on one MQTT subscription and publishes the result to another (I could use Node-RED for that, but perhaps I’ll just build it myself). There are already quite a few building blocks available, such as free apps that report your phone’s location to an MQTT server of your choice or offer (Apple) HomeKit integration, C libraries for Arduino and the like, etc. I’ll describe this setup in more detail in a separate post.
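
To make the idea concrete, here is a minimal sketch of such a ‘reasoning’ component, assuming Python with the ‘paho-mqtt’ package and a Mosquitto broker on the Pi; the topic names and payloads are hypothetical:

    import paho.mqtt.client as mqtt

    def on_message(client, userdata, message):
        # Reasoning step: when the phone reports it is home, switch on a light
        if message.payload.decode() == "home":
            client.publish("lights/hallway/set", "on")

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("raspberrypi.local")  # hypothetical broker address
    client.subscribe("phone/location")
    client.loop_forever()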

Voice recognition
To bring Star Trek, and probably many other sci-fi worlds, a bit more to life, controlling your home with your voice would be a cool thing! There are roughly two ways to get the audio in: a nearby mic (i.e. your phone) and far-field mics (that hear you from across the room). There are also roughly two ways to process the audio: locally (in your home), or remotely (at the servers of a company you may or may not fully trust). Currently, not all combinations of the above are readily available.

Picking up your voice across the room over background noise has currently (to my knowledge) only been demonstrated by the Amazon Echo (and Echo Dot) and the Google Home. Both process data remotely and have always-on mics, which may be a privacy concern. I for one would not mind, but they’re not yet available here.

Voice recognition on your phone is much less cool. While ‘Ok Google’ may have some custom integration options, Siri comes equipped for home automation out of the box, as long as your setup supports HomeKit. Another solution I want to look into is Tasker (an Android app), combined with plugins for voice recognition and MQTT messaging.

In conclusion
It’s clear that a whole range of home automation devices is flooding the market. You can either lock yourself into one brand (such as Philips Hue) or one protocol (such as Z-Wave), or – with a bit more technical knowledge – build a system that integrates multiple systems. Especially the latter approach is, in my opinion, only just now feasible. Not being locked in, and being able to add a bit of your own hacking and tinkering to a system that otherwise consists of off-the-shelf components, convinced me to finally, after all these years, make my home into what in my childhood was predicted as the future.

Status update

Well, that didn’t last long; less than a year and a handful of posts later, this blog, too, came to a grinding halt. One has to give credit to those who do manage to keep the posts coming and build up an audience, especially those doing it on the side.

But well, as is my philosophy with all the new stuff I start full of enthusiasm (knowing it is going to wind down at some point): it’s not about starting new things, and it’s not about keeping the habit going. It’s about picking up again after dropping out (which is going to happen, often sooner, sometimes later). To quote the late Aaliyah (paraphrasing others): ‘If at first you don’t succeed, dust yourself off and try again’.

So here we go. I just removed the layer of dust from this blog, starting with clearing out 2000 (!) comments that were 99.9% spam and updating WordPress (I had to use the command line utility and manually change some permissions, but all notifications are gone from the dashboard now).

I noticed some of my earlier posts deserve a follow-up. Also, the name of this blog does not currently align with the blogging plans I have. Although I’m still working as a mobile developer, it’s mostly for customers and ‘just make it work’, so blog-worthy endeavours are sparse. Instead, I have some new hobbies. I’m now figuring out so much home automation stuff that I feel there’s lots of material there. Also, I’ve picked up my aquarium hobby after more than a decade, which may be food for some posts as well.

From what I’ve learned from the previous burst of effort, single blog posts can be enough to get visitors to your blog (much easier to achieve than getting regular readers). In that respect, Google is your friend. This blog, and even one that has been abandoned since 2012, as well as a few tiny one-off single-subject side projects, still bring in a few visitors a day (although I’m not sure which part of that is referral spam or other irrelevant traffic).

But then again, perhaps this post will be followed by another long silence. Only time will tell.

iOS Localization, some reflection and a hack

As an app developer, it can be quite effective to localize your apps (for this article, we focus on language), especially if you speak both English and your native tongue. At Japps, this is exactly what we do: localize to at least English and Dutch. Both iOS and Android provide means for this, although localizing the text on UI components is, frankly, a bit easier on Android. On iOS you are basically forced to duplicate your entire UI, which makes maintenance tricky (it violates the DRY principle).

Apple’s solution (prior to iOS 6)

Sadly, there are no real solutions available yet, although there are tools that speed up the process. The steps boil down to: duplicate the Storyboard / nib files, extract the strings from one language, translate them into another and put them back (both using a command line tool). Any UI change will require a repetition of these steps – although you can take a shortcut here by reusing most of the earlier translations. The upside is that you can adjust your UI (sizes) to the different word lengths, but this is not always a big issue.

Alternative solution

Another solution that has been proposed is transferring the translation process entirely to code by creating an outlet for each control that has text on it (see this tutorial). The downside is that it requires quite some code for every new view. Furthermore, the texts in IB / storyboard become totally meaningless, which may be confusing. To prevent this, they could be added just for clarity, but they’d still need to be defined in the strings file, which would again be a non-DRY solution, and extra work.

Base Internationalization (iOS 6)

Luckily, Base Internationalization is coming, and it seems this will provide a KISS solution to localization on iOS. Auto Layout could then ensure UI elements adapt their size and location to the length of the strings in the current language. This won’t work on iOS 5 though, so we’ll have to wait a bit before it is ‘acceptable’ to stop supporting iOS 5 users.

Our solution (for the time being)

In the meantime, we crafted a bit of code that loops recursively over all views and checks their strings for square brackets. If anything like ‘[Blah]’ is found, it is replaced by the translation of ‘Blah’, or by ‘Blah’ itself if no translation was found. The latter means that for the ‘original’ language (i.e. the language used inside IB), no or only a few entries have to be added to Localizable.strings. For other (newly added) languages, everything can be added to a single Localizable.strings file. Meanwhile, the UI in IB stays fairly understandable (since all texts are still meaningful, albeit wrapped in square brackets).

This was implemented as ‘categories’ on NSString and UIView. In the latter case, subviews are checked recursively, and for different types of view (label, button, etc.) slightly different steps are taken. The only thing that needs to be done for every view that is loaded from IB (in the view controller’s viewDidLoad, or after manually loading the view from a nib) is to include lngfkt.h and call [theTopView lngfk]. In views that are instantiated from code, NSLocalizedString can be used as ‘normal’.

The code

lngfkt.h
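
A minimal sketch of such a header (the category methods follow the text above; everything else is an assumption):

    #import <UIKit/UIKit.h>

    @interface NSString (Lngfk)
    // Returns the translation for '[Key]'-style strings, the string itself otherwise
    - (NSString *)lngfk;
    @end

    @interface UIView (Lngfk)
    // Recursively translates the texts of this view and all its subviews
    - (void)lngfk;
    @end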

lngfkt.m
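
And a sketch of the matching implementation (again an approximation: only labels and buttons are handled here, where the original covered more view types):

    #import "lngfkt.h"

    @implementation NSString (Lngfk)
    - (NSString *)lngfk {
        if ([self hasPrefix:@"["] && [self hasSuffix:@"]"]) {
            NSString *key = [self substringWithRange:NSMakeRange(1, self.length - 2)];
            // NSLocalizedString falls back to the key itself when no translation exists
            return NSLocalizedString(key, nil);
        }
        return self;
    }
    @end

    @implementation UIView (Lngfk)
    - (void)lngfk {
        if ([self isKindOfClass:[UILabel class]]) {
            UILabel *label = (UILabel *)self;
            label.text = [label.text lngfk];
        } else if ([self isKindOfClass:[UIButton class]]) {
            UIButton *button = (UIButton *)self;
            [button setTitle:[[button titleForState:UIControlStateNormal] lngfk]
                    forState:UIControlStateNormal];
        }
        for (UIView *subview in self.subviews) {
            [subview lngfk];
        }
    }
    @end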

There is a fair number of edge cases that are not covered by this approach. Some just haven’t popped up yet (the above series of if statements could probably cover more UIView subclasses), others need some more trickery.

For navigation bars, the lngfk method has to be called separately: [self.navigationController.navigationBar lngfk]. For static table headers and cell views, a bit of code is required in the class that implements the UITableViewDelegate protocol (the UITableViewController, for instance):
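
Something along these lines should do (a sketch; for static storyboard tables, asking super for the header title is assumed to return the text defined in IB):

    - (void)tableView:(UITableView *)tableView
      willDisplayCell:(UITableViewCell *)cell
    forRowAtIndexPath:(NSIndexPath *)indexPath {
        [cell lngfk]; // translate static cell content loaded from the storyboard
    }

    - (NSString *)tableView:(UITableView *)tableView
    titleForHeaderInSection:(NSInteger)section {
        return [[super tableView:tableView titleForHeaderInSection:section] lngfk];
    }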

The pitfalls

It is a hack. It doesn’t even really adhere to any naming conventions. We used it and found it a fair alternative compared to the other options. The code is posted just in case others like this idea as well. Please use it at your own risk. Also, this happens at runtime, which costs time. Not noticeably in our case, but it could be.

iOS Core Plot (minimal example)

At our company, Japps, data plays an important role. We wanted to present graphs to the user inside our iOS apps. Luckily, there is Core Plot (there are other options, but we like this one). It works like a charm, it is extremely flexible, and it is open source (which is useful if you want even more flexibility or want to debug). There is also quite some reference code and Q&A available. However, I could not find a minimal example, so I created one.

Setting up shop

I created a new project from scratch, using the Single View template. I included Storyboard and ARC. Then I downloaded and installed Core Plot (I went for the static library approach).

In Storyboard, the class of the main view controller was changed to ‘CorePlotExampleViewController’, and this class was actually created (subclassing UIViewController). The contents of this class are discussed next.

CorePlotExampleViewController.h

Now, CorePlotExampleViewController.h was slightly modified into:
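
Along these lines (a sketch, reconstructed from the description below):

    #import <UIKit/UIKit.h>
    #import "CorePlot-CocoaTouch.h"

    @interface CorePlotExampleViewController : UIViewController <CPTPlotDataSource>

    @end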

What’s going on? Well, we import CorePlot-CocoaTouch.h, which gives us access to all Core Plot functionality. Also, we implement CPTPlotDataSource; why this is needed will become clear later.

CorePlotExampleViewController.m

Next, we changed our view controller’s viewDidLoad:
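
Roughly as follows (a sketch following the steps described below; the exact ranges and styling are guesses):

    - (void)viewDidLoad {
        [super viewDidLoad];

        // Host view that embeds the Core Plot graph in the view hierarchy
        CPTGraphHostingView *hostView =
            [[CPTGraphHostingView alloc] initWithFrame:self.view.bounds];
        [self.view addSubview:hostView];

        // The graph itself; use it to change axes and global layout
        CPTGraph *graph = [[CPTXYGraph alloc] initWithFrame:hostView.bounds];
        hostView.hostedGraph = graph;

        // The plot space determines which part of the plot is shown to the user
        CPTXYPlotSpace *plotSpace = (CPTXYPlotSpace *)graph.defaultPlotSpace;
        plotSpace.xRange = [CPTPlotRange plotRangeWithLocation:CPTDecimalFromInt(-5)
                                                        length:CPTDecimalFromInt(10)];
        plotSpace.yRange = [CPTPlotRange plotRangeWithLocation:CPTDecimalFromInt(-2)
                                                        length:CPTDecimalFromInt(20)];

        // The actual plot; its X and Y values come from the data source
        CPTScatterPlot *plot = [[CPTScatterPlot alloc] init];
        plot.dataSource = self;
        [graph addPlot:plot];
    }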

What’s going on? First, CPTGraphHostingView* hostView is created. Alternatively, you could (as described in other examples) do this via IB (Storyboard), create an outlet and name it ‘hostView’.

Next, CPTGraph* graph is created and set as the hostView’s ‘hosted graph’. Use the graph object to change the axes and global layout. It can contain one or more plot spaces and actual plots (the lines, bars, dots, pies, etc). In this example, we take the graph’s default plot space and change its x and y ranges; this determines what part of the plot is shown to the user.

Finally, we create CPTScatterPlot* plot, the actual plot line itself. The X and Y values of the plot are not specified yet. Instead, we set its dataSource, which will provide the X and Y values upon the graph’s request. To keep things simple, our CorePlotExampleViewController class will double as dataSource, but this could be a separate object.

To let our view controller double as datasource, we already let it implement the CPTPlotDataSource protocol (in the .h) file, and now only need to define two methods:
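
A sketch of those two methods, following the description below:

    - (NSUInteger)numberOfRecordsForPlot:(CPTPlot *)plot {
        return 9; // indices 0 through 8
    }

    - (NSNumber *)numberForPlot:(CPTPlot *)plot
                          field:(NSUInteger)fieldEnum
                    recordIndex:(NSUInteger)index {
        int x = (int)index - 4; // x runs from -4 to 4
        if (fieldEnum == CPTScatterPlotFieldX) {
            return [NSNumber numberWithInt:x];
        }
        return [NSNumber numberWithInt:x * x]; // y = x * x
    }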

The method numberOfRecordsForPlot returns the number of data points. We simply set it to 9. The method numberForPlot is actually called twice per data point (for an XYScatter plot at least, which is what we created) – it took me a while to realize this. One time the X value should be returned, the other time the Y value. This is determined by fieldEnum.

The plot’s points are requested by their index, which in our case ranges from 0 to 8 (so 9 in total, as we specified in numberOfRecordsForPlot). To get a nice quadratic plot without too much hassle, we define x as ‘index – 4’ (but this could be something irregular as well, such as values returned from an array). The y value is then simply x * x. The return value should be an NSNumber, so you’ll have to create one, for instance with numberWithInt:, as in this case, or numberWithDouble: (etc.).

Closing remarks

Well, that’s it really. You should now get a plot when you run your project. This is as simple as it gets, and from here you can make it as complicated as you want: change the graph (theme, padding, border, background), change the axes (color, ticks, font, labels, location, multiple y axes), add more plots, change the plot’s line (thickness, color, symbols), add a legend (location, font), draw different plots (points, lines, bars, pies), etc. There is quite some information on this already, but perhaps I’ll do a follow-up post with the solutions we found for our graphing demands.

Why I think Android developers should rotate their phone often

In a well-written app, a lot happens in the background when you rotate your phone, but the user should not be aware of it. As a developer, however, you should be aware of it, you should adapt to it, and you can use it to your advantage. Here’s why I think Android developers should rotate their phone often. Note: I’m linking to Stack Overflow questions here and there, because the corresponding answers contain very practical and minimal code examples.

Adaptive layout

When a phone is rotated, the screen dimensions change. This may have a detrimental effect on the appearance of your app. It is very tempting as a developer to have only one testing device and adjust everything to that. Rotating your phone forces you to also make your landscape view look good, and it makes you aware of how important it is to expect the unexpected when it comes to screen sizes and resolutions, especially with Android, which runs on a wide range of devices (the HTC ChaCha, for instance, has a screen that’s in landscape by default). You could decide to ‘lock’ your app to portrait, but when you’ve finished reading this blog post, you might decide to do so only at the very last moment.

Activity restarting

When the phone rotates, the Android OS restarts the current activity. Something needs to be done to hide this event from the user! In a very simple app with views that have an id assigned, this is done for you. But in many cases you will need to add code to make this work. Although you could catch the orientation event and prevent the restart of your activity altogether, the preferred method is to store the state of your activity in onSaveInstanceState. In that case, all important information about the current state of your activity needs to be stored as key / value pairs (the keys being strings). Stuff that does not fit the Bundle object used to be stored via onRetainNonConfigurationInstance, but this has been deprecated and fragments should be used instead.
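
A minimal sketch of the save/restore dance (the ‘score’ field is a hypothetical piece of state):

    import android.app.Activity;
    import android.os.Bundle;

    public class ExampleActivity extends Activity {
        private static final String KEY_SCORE = "score"; // hypothetical state key

        private int score;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            if (savedInstanceState != null) {
                // Restore state after rotation (or after the activity was destroyed)
                score = savedInstanceState.getInt(KEY_SCORE, 0);
            }
        }

        @Override
        protected void onSaveInstanceState(Bundle outState) {
            super.onSaveInstanceState(outState);
            // Store everything needed to rebuild the activity in its current state
            outState.putInt(KEY_SCORE, score);
        }
    }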

Use it to your advantage

If you lock your app into portrait mode, you don’t need to deal with the aforementioned issues. Well, at least not on rotation. But your activity might be destroyed at any time, especially when the user goes off to do something else but hopes to find your app in the same state when he / she gets back to it. This is one of the – in my opinion – nifty parts of Android’s ‘multitasking’: stuff is kept in memory if possible, but destroyed in case of low memory and restored (in the original state, if the developer did his/her job) when needed. How to test for such events? That’s quite tricky, because it might take some effort to force the system to really destroy the activity. That’s where rotating your screen comes in handy: during this event, as far as saving and restoring instance state are concerned, the same happens as when your activity gets destroyed.

Closing remarks

I’m currently using a Hacker News reader that did not implement the above: every time I rotate my screen, the app needs to reload its content from the internet. When reviewing a developer, this is a simple way of testing whether he / she has at least some grasp of Android. If you care about your users and their experience, rotate your screen on a regular basis.

Hands on{x}

A Microsoft team from Israel released on{x} yesterday (as a beta version). This team researches location and activity awareness for Bing. On{x} enables the user to trigger tasks (reminders, websites, sms) on events such as reaching a location, changing the mode of movement (driving, walking, running) or simply on time. For now, on{x} consists of an Android application with a website back end, connected by Facebook login. Let’s dive in.

Two layers of complexity

On{x} basically has two layers of complexity: ‘recipes’ enable non-technical users to quickly configure rules, while technical users can show off their JavaScript skills. The interesting part is not yet up and running: having the latter group of users create ‘recipes’ that the first group can use.

Rules = Scripts

Basically, on{x} is a bunch of scripts, called ‘rules’, that can be turned on or off. Such a rule is simply JavaScript code that gets executed when the rule gets (re)loaded. JavaScript event handlers can be defined that are triggered when an event takes place somewhere in the future. An API (with documentation) is available for a range of triggers and a range of actions. Scripts are written in the browser (Ajax.org Cloud9 Editor) and pushed to the phone upon save. There’s a logging system, which is also accessible through the browser.

Play a song when reaching home

I decided to write a rule that starts playing a song when I get home. Pointless, but technically interesting. One possibility is geo-fencing: you define an area, and triggers can be set to go off whenever the phone leaves or enters it. This however requires GPS to be on, which is a potential energy hog.

Trigger on Wifi

Instead, I decided to try and detect when my home WiFi signal (SSID) comes into range. For this, the API is a bit vague, but with help of the forums and some debugging I found:

This event is triggered when the phone does a Wifi scan. It is also possible to force a scan yourself:

The major deviation from the documentation here is that to get to the scanResults in JavaScript, toArray() has to be called. Basically, scanResults is a (wrapper around a) Java List object.

Play a song

Playing a song from the sdcard was also not very obvious. The good thing is that on{x} is only a very thin layer on top of Android, and I finally found this to work:

Saving state

The last challenge was how to ‘remember’, when we find the SSID of interest in range, that it wasn’t in range before. For this, I used localStorage to store the state. The state starts uninitialized. As soon as a scan has been conducted, the state changes to ‘in-range’ or ‘out-of-range’. Only on a switch from the latter to the former will the song be played.
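
The gist of that state machine (a sketch of the logic only; the WiFi event wiring and the actual song-playing call from the sections above are abstracted into the hypothetical handleScanResults and playSong):

    var SSID = "MyHomeNetwork"; // hypothetical home SSID

    // ssids: array of SSID strings, e.g. from scanResults.toArray()
    function handleScanResults(ssids) {
        var inRange = ssids.indexOf(SSID) !== -1;
        var previous = localStorage.getItem("wifiState"); // null when uninitialized

        if (inRange && previous === "out-of-range") {
            playSong(); // only on the out-of-range -> in-range transition
        }
        localStorage.setItem("wifiState", inRange ? "in-range" : "out-of-range");
    }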

Closing thoughts

Before I paste my entire script (which I will submit for review by on{x}) below, first the final verdict on on{x}. Overall, it is a true beta: the documentation is immature, the app kept crashing on me, logging has quite a lag, there are very few ‘user-friendly’ recipes at the moment, a lot of ‘actions’ are missing (changing volume, turning WiFi on / off) and finally there’s the required Facebook login, which seems to disappoint quite some users. However, there is already a lot of activity on the forums, and every web developer who knows his/her JavaScript can get started in no time. The recipe layer and the option for coders to publish their rules have a lot of potential as well.

At first it seems surprising that Microsoft published this for Android, but in the end this platform is the most flexible. The availability of an existing open-source JavaScript engine for Java (Rhino) may also have helped. Given the fact that on{x} is written so close to the Android architecture (WiFi scan results, for instance, are android.net.wifi.ScanResult objects in a JavaScript jacket, device.applications.launchViewer is a very thin wrapper around Android’s View intent, etc.), I truly doubt this will be available for other platforms any time soon, not without a lot of effort and abstraction of the API.

If a small startup had come up with this beta, they would have gotten my approval. But from Microsoft, I would have expected more. They should have started with a closed beta: now the Play market is flooded with 1 and 2 star reviews, and they have already used up most of the attention wave.

If you’re not a hacker, I’d stay away from on{x} for now. If you are, you might want to try it out, and hang around to see if it does become mature at some point, so you can quickly jump in to seize whatever opportunity there may be.

Final script

As promised, the entire script on a platter:

Hardware for iOS development

What hardware does one need for iOS development? This question is especially important if you, like me, don’t own a Mac, or yours is old. Does one need a Mac? If you are willing to go non-native, you can use PhoneGap Build, which is web based and bakes your app from HTML/CSS/JS. But in that case you still need access to a Mac at some point to configure your certificate. Also, PhoneGap limits you, and PhoneGap Build does so even more. Others may succeed in installing OS X on a normal PC, but that is a forest of tutorials and trial and error that I have explored long enough to know I don’t want to waste any more of my time on it.

Xcode 3 or 4

For classical Xcode development, you will need a Mac. What kind? Summing up a lot of threads all over the internet, a 2 GB / ~2+ GHz / Snow Leopard setup used to be enough. But is that still the case? Because that applies to Xcode 3. To target iOS 5 (more about that below), one needs Xcode 4.2 or higher; to target iOS 5.1, Xcode 4.3 or higher is required. The former is available for OS X Snow Leopard. The latter, however, requires at least Lion, and even testing on an iPhone with iOS 5.1 requires either Xcode 4.3 or a workaround (link). The general consensus is that for Lion + Xcode 4, a minimum of 4 GB of RAM is reasonable.

iOS 4 / 5 / 5.1

Since Xcode 3 doesn’t support iOS 5, and Xcode 4 (and especially 4.3) requires a better Mac, the iOS 4 vs 5 debate is important. In contrast to Android, iOS has a much faster adoption rate. Looking at data from various websites, the market penetration of iOS 5 is already 80 to 92 percent, with iOS 5.1 in the lead. So from that point of view, targeting iOS 5 makes sense. Even though iOS 4 apps will generally run on iOS 5, you’d be missing out on various new features, and this gap will only grow over time.

So. Mac. It. Is.

So, every single path leads to buying a Mac. One could get away with buying a second-hand machine, but as a company, to overcome the VAT difference (plus the risk of buying second hand), you would need to buy such an old Mac that chances are its specs will be, or will very soon become, outdated.

So my current position on this matter is to buy a new Mac. This boils down to buying (at least) either a MacBook Pro 13″ / 4 GB / 2.4 GHz i5 / Lion for € 965 (excl. VAT) or a Mac Mini 4 GB / 2.3 GHz i5 / Lion for € 590. Add € 50 for a KVM switch and a Mac-VGA cable for the latter and you’re in business, albeit not mobile.

Destroyed by RAID1. Backup plan 2.

So here’s how I lost my data because of RAID1. Apparently, my mobo (Asus P5W DH Deluxe) has an on-board RAID1 option, which is on by default. Simply plugging the SATA cable from a data-containing hard drive into the second port (my OS disk was on the first port) was enough to cripple my data beyond repair: my OS disk was mirrored over my data at the bit level.

Needless to say, my heart aged a bit that day. And my mind got a crash course in data recovery and the NTFS filesystem, but to no avail. All I found back on the crippled data drive were files that were actually on my OS disk.

How could I miss that? This mobo has been in my computer for ages. Or did I know, but forgot somewhere along the way? And why, for crying out loud, didn’t I make any backups for the last three years? I can only conclude this is all my own fault.

On a higher level, I drew two conclusions:

  1. Because of how my data is organized, not all was lost: my data is spread over Gmail, Gdrive, BitBucket, DropBox, NAS server, Laptop, remote VPS servers and multiple disks.
  2. Because of how my data is organized, a good backup regime is quite hard to come up with and to implement.

Instead of looking back, I decided to look forward, and I came up with a list of properties that I would like to see in a backup system:

  • Multiple locations to back up from, and multiple locations to back up to
  • Fine-grained control on the folder and file level of:
    • What to back up
    • Where to back up to, and the number of replicas to maintain
    • The number of versions to maintain and what to do upon deletion
  • Backup over a network connection
  • Moving files or folders should not result in data duplication on the backup site
  • Deletion of files should not immediately result in deletion from the backup
  • Deletion of files should not result in infinite retention inside the backup
  • Dashboard with stats to monitor free space, backup site availability and deletions
  • Work on Linux, and preferably also with Windows
  • Recovery should be possible based on individual or multiple backup sites

Does this exist? I don’t know. I didn’t look too hard, but instead started working on this myself. It’s probably stupid, but only time will tell. Here are a few key aspects:

  • Written in PHP. I know PHP and I benefit from learning more about it. Furthermore, it is available from the command line and via a web server (good for the dashboard part) on all my systems.
  • SQLite as the database on both ends. Especially for the backup location this means that the metadata lives in exactly the same folder as the file data and can be copied around easily when needed.
  • Backup files are named <md4>_<filesize>.bck (see the sketch below this list). This means that files that are exactly the same will only be stored once. I hope md4 + filesize is enough to reasonably prevent collisions. SHA would be better, but slower. Perhaps MurMur3?
  • Data transfer is over a socket and uses hashing to check for data integrity. Files already on the backup location are skipped (based on information on both ends) or (when incomplete) continued.
  • Regular expressions give fine-grained control over what to back up, where, how many versions, etc.
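
To illustrate the naming scheme (a sketch; the helper name is an assumption):

    <?php
    // Identical files map to the same backup name, so they are stored only once.
    function backupFileName($path) {
        return hash_file('md4', $path) . '_' . filesize($path) . '.bck';
    }

    echo backupFileName('/home/me/photo.jpg'); // e.g. "31d6...d98f_482113.bck"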

File transfer is already working nicely. Backing up a folder structure, with fine-grained control over what to back up and how many versions to maintain, is working as well. There’s a bit of logging up and running, and a very rudimentary dashboard as well. A simple recovery script has also been tested (but this doesn’t work over the network yet). Scaling this to multiple locations and repositories, and handling file deletions (I’m planning to report deletions and keep the files for X days, during which they can be checked and corrected manually via the dashboard), is on the todo list. So far there is no security. I could add some, or simply tunnel over SSH for remote connections.

The nice thing is that this backup solution could help me back up my documents, my pictures, my mp3s (which would be fine on a different disk inside the same building), and also database dumps and logs from my web servers.

I think I want to also include BitBucket, Gmail and Gdrive exports, as well as DropBox files. Or can we trust those parties with our digital lives?

Sharpening the Saw

Stephen Covey‘s 7th habit of highly effective people is called ‘sharpening the saw’. He states that in order to be – and more importantly, to stay – ‘effective’, it is important to invest in yourself. At first, it may seem a good idea to spend all your hours on production (for a developer, that would be coding), leaving little time for social activities, physical exercise or studying. But as time goes by, your physique will not be able to keep up, you fall behind technologically, and friendships evaporate. All these effects become very detrimental to your productivity at some point. In the long run, your net productivity will actually be less than if you had reserved time to sharpen the saw.

Enter me running on my new shoes. Aren’t they hideous? But they were cheap (on sale) and they feel so much better than my regular shoes. I hadn’t expected this much of a difference, but they truly absorb a lot of the impact that would otherwise end up in my knee joint.

I’ve picked up running again after my sister ran a marathon. Not that I expect to ever be able to do that, but it reminded me how your head clears while running (all you can think about is panting and aching..) and of the rush afterwards. And as a bonus, it sharpens the saw, keeping me fit for the next coding spree.

Possible TMI: I currently run 4 km in 28 minutes. That boils down to a pace of 7 minutes per km, which is not super fast, but fast enough for me. I use RunKeeper to keep me informed of my progress during the run. This is really different from how it used to be (I timed my runs, but since I always took a different route, I had no way of determining how fast I actually ran). Yet another example of how mobile devices are totally awesome.

PhoneGap templating with Mustache & jQuery

A useful habit to pick up somewhere along the road is templating. In this case, we will use the mustache.js templating engine, which can be used in an HTML & JavaScript setting. Templates consist of HTML containing {{variablename}} tags. The solution proposed here (which uses jQuery) loads each template from a separate file, which I find useful to keep things modular as well as orderly. Loading templates from separate files makes particular sense in a PhoneGap setup, but works just as well in the browser, although in the latter case one can often resort to server-side templating (or at least merge the separate template files into the main HTML file to prevent excessive HTTP requests).

Note for testing: my Chrome browser threw some cross-domain issues when testing this locally through the file:/// URI. For me (Ubuntu) the solution was to close Chrome and restart it from the command line: google-chrome --new-window --allow-file-access-from-files file:///..

Templates

Let us start with a very simple file under tpl/test.tpl (relative to your index.html file; create the tpl folder yourself).
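
For example (a sketch; the field names are hypothetical):

    <div class="person">
      <h2>{{name}}</h2>
      <p>{{description}}</p>
    </div>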

Next, download mustache.js, throw it into your project and link it from your HTML file:
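
With something like the following (the paths are assumptions; jQuery is assumed to be linked as well, since the loader below uses it):

    <script src="js/mustache.js" type="text/javascript"></script>
    <script src="js/jquery.js" type="text/javascript"></script>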

JavaScript – Loading the templates

Here’s a function you can use to load all your templates into (string) variables at once.
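
A sketch of such a loader (the function and variable names are assumptions):

    var templates = {};

    function loadTemplates(names) {
        $.each(names, function(i, name) {
            $.ajax({
                url: 'tpl/' + name + '.tpl',
                dataType: 'html', // force jQuery to interpret the result as plain html
                async: false,     // block until this template is loaded
                success: function(html) {
                    templates[name] = html;
                }
            });
        });
    }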

We load each file through an Ajax request, and force jQuery to interpret the result as (plain) HTML. Note that we enforce synchronous loading: the script blocks until the template is loaded before continuing with the next one. This way we don’t have to worry about semi-loaded templates when the script continues.

This function could be called from jQuery’s document ready function:
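
Along these lines:

    $(document).ready(function() {
        loadTemplates(['test']); // loads tpl/test.tpl into templates['test']
        // ... build the initial page here ...
    });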

JavaScript – Rendering a template

To render a mustache template, simply feed it a structure that contains data fields corresponding to the {{fieldname}} tags in the template. These are then replaced. The resulting HTML can be used to build your page, for instance with jQuery’s .html() command:
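
For instance (a sketch; the data and target element are hypothetical):

    var html = Mustache.to_html(templates['test'], {
        name: 'John Doe',
        description: 'Just an example'
    });
    $('#content').html(html); // assumes a <div id="content"> in index.html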

Wrap up and further ideas

From here on, you can make more, and more complex, templates. You might also want to introduce better error handling / recovery. And when you have many templates, it may be an option to do partial loading, or even loading on demand.