We’re closing down.

  • Time constraints (duplication of data across platforms)
  • Financial constraints (paying for unused services)

Those two reasons are why I am not going to continue paying for this blog at $10.00/month. It’s not much, but it’s not needed.

If you want to keep up with development, check out my GitHub.

GNUCash. It’s awesome.

Now, I’m no financial guru (or am I?), but I’m quite content with GNUCash. For me, GNUCash is a simple finance-management solution that extends well past using Excel as your budget tracker (not that you should do that).

The reason I enjoy using GNUCash (apart from it being freely available) is how flexible yet powerful it is. In under 5 minutes I was able to reconcile my accounts and track down the $18.20 discrepancy between my bank statement and my account balance in GNUCash. The reporting functionality is immaculate (more on that later), and the ability to use multiple accounts, each with multiple journals, is amazing.

Disclaimer: I love this software.

GNUCash uses double-entry accounting for its transactions. In basic terms, every transaction debits one account and credits another. This simple concept allows for in-depth analysis of the current cash flow of an account. For example, this is how I personally set up an opening balance against an account:

As you can see, under the account “Westpac Expense Account” I am depositing $100.00, which has a double entry to Asset: <Expense Account>. Now this is where double-entry accounting becomes exciting. In the accounting world, “credits must always equal debits”, meaning that whenever an expense is incurred, there must be a debit entry under the Expense Account (we’re adding to the value of this expense account) and a matching credit to the cash account responsible for paying it. The following example is how you would record your Gas Expense, and in turn reduce the total balance of your bank account:
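The invariant behind all of this can be sketched in a few lines of Python (a toy model for illustration only; the account names are made up and this is not how GNUCash stores its data):

```python
from collections import defaultdict

# Running balances, using a signed convention: debits positive, credits negative.
balances = defaultdict(float)

def post(debit_account, credit_account, amount):
    """One transaction: debit one account, credit another, equal amounts."""
    balances[debit_account] += amount
    balances[credit_account] -= amount

# Pay $100.00 for fuel from the bank account: the Gas expense account
# is debited, and the asset (cash) account is credited.
post("Expenses:Auto:Gas", "Assets:Westpac", 100.00)

# "Credits must always equal debits": the whole ledger nets to zero.
assert sum(balances.values()) == 0
```

Because every posting touches two accounts with equal and opposite amounts, any non-zero total immediately flags a one-sided or missing entry, which is exactly what reconciliation hunts for.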

In a transaction report, we would see Auto: Gas debited to bring a balance of $100.00 and the Asset: <Account> credited $100.00. On a short side-note, when I studied accounting and finance (quite some time ago), the easiest mnemonic for the account types in double-entry accounting was PALER:

P – Proprietary Equity (Owner Equity)

A – Assets

L – Liabilities

E – Equity

R – Revenue

Which, depending on the transactional nature of the account, determines whether an entry is recorded as a debit or a credit. Roughly speaking: Assets increase with a debit and decrease with a credit, while Proprietary (Owner) Equity, Liabilities, Equity and Revenue increase with a credit and decrease with a debit.
However, I have deviated slightly from the topic. I am not here to teach accounting; I’m here to tell you why I love this program. Apart from simple entries, you are able to reconcile accounts easily; that is, to identify discrepancies and work out why credits aren’t equalling debits. For the example mentioned above, it is able to query the transactions and provide an informative window:

You cannot tell me that’s not a huge help in keeping track of finances!  So again, this financial application not only allows you to easily keep on top of income and expenses, but gives you a clear view of your expenses, income and discrepancies (similar to MYOB), all for free.

The last amazing feature I will mention is that reports are very easy to create (should you want to build your own queries), alongside the several reports that ship with the application. For example, the cash flow report will record all “Main Accounts” pertaining to income and expenses, as in this example:

So give it a go, and I am quite confident you’ll thank me later!

After a year of Windows, we’re back!

So, tonight I’ve decided it’s time to take a break from gaming (yes, that’s the reason why this blog has been dead for months) and get back into a bit of coding, system automation (via the use of coding, obvs) and Linux. Yes, that’s right; it’s back, the superior Operating System/Platform is back.

Apart from Discord and Slack now having native installers for the OS, one other thing has really impressed me: how easy it is to install the Enpass password manager! Literally, it’s easier than on Windows.

The following commands will install Enpass on your PC (and yes, you can chain them if you’re lazy):

$ sudo -i
$ echo "deb http://repo.sinew.in/ stable main" > /etc/apt/sources.list.d/enpass.list
$ apt-get update
$ apt-get install enpass
$ exit

How easy is that? And here I was, dreading having to convert my app database into KeePass or whatever inferior password manager existed out there!

PGP, a little “what is it”.

In my post about my backup policies, we touched on how I use Symantec PGP Disk Encryption to store my data in personal vaults, preventing access without the correct passphrase. A few people asked me to expand on what I meant by using PGP zips, and why I use this solution.

To follow up, I will discuss (very briefly) what PGP is, and why I opted for this solution over others (such as AxCrypt or EncFS).

What is PGP?

PGP stands for “Pretty Good Privacy”, and is a data-encryption program (and the protocol behind it). PGP has multiple use cases, but it is primarily used for encrypting and signing data (‘file signatures’).

The following is an excerpt of the definition (from Wikipedia) of PGP:

PGP is used for signing, encrypting, and decrypting texts, e-mails, files, directories, and whole disk partitions and to increase the security of e-mail communications.

So, when thinking of the process implemented within PGP, there is both a public and a private key involved. Email signing, for example, is one of the main usages of PGP.

The following explains the process of sending a PGP-signed, encrypted email to a recipient.


  1. The sender signs the plaintext with their private key;
  2. The application encrypts the plaintext (and signature) with a session key, which is itself encrypted with the recipient’s public key;
  3. The recipient decrypts the session key with their private key, and with it the message.


Why did you select PGP?

Apart from the fact that I already own keys to applications using PGP, as far as I am aware there are few or no known security issues with PGP itself.

Whilst this may not necessarily be true (keyloggers and social-engineering are two methods to bypass this), the following assumption was made:

Q: Can’t you break PGP by trying all of the possible keys?

A: This is one of the first questions that people ask when they are first introduced to cryptography. They do not understand the size of the problem. For the IDEA encryption scheme, a 128 bit key is required.

Any one of the 2^128 possible combinations would be legal as a key, and only that one key would successfully decrypt the message. Let’s say that you had developed a special purpose chip that could try a billion keys per second. This is far beyond anything that could really be developed today.

Let’s also say that you could afford to throw a billion such chips at the problem at the same time. It would still require over 10,000,000,000,000 years to try all of the possible 128 bit keys. That is something like a thousand times the age of the known universe!

While the speed of computers continues to increase and their cost decrease at a very rapid pace, it will probably never get to the point that IDEA could be broken by the brute force attack.
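The arithmetic in that quoted answer is easy to check yourself (using the FAQ’s assumptions of a billion chips, each testing a billion keys per second):

```python
# Brute-forcing a 128-bit IDEA key at the FAQ's assumed rate.
keys = 2 ** 128                   # size of the key space
rate = 10 ** 9 * 10 ** 9          # a billion chips x a billion keys/second
seconds = keys / rate
years = seconds / (365.25 * 24 * 3600)

print(f"{years:.2e} years")       # on the order of 10^13 years, as quoted
```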

A further report can be found here.

Furthermore, there is nothing preventing me from layering other security measures around the data. Symantec’s PGP program simply encrypts a portion of a hard drive. Once you want to access this portion of the drive (and the data stored in the “PGP Vault”), you must supply the passphrase to decrypt the data; more sub-zips, or other encryption methods, can be used inside the “virtual mounted drive”.

OpenPGP? PGP? GPG? What are they?

These protocols or “solutions” descend from the original PGP program; OpenPGP is the open standard derived from it, and GPG (GNU Privacy Guard) is a free implementation of that standard. Each has its own advantages, disadvantages and may have its own security threats associated with it.

To summarize; how do you use this?

To clarify, PGP is not the only tool I rely on; security, and the layers of complexity are vast.

Think of the directories like so:

/ root
/ root/PGPdisk/

The files (not folders) under the mounted directories are individually encrypted with AxCrypt.

There are 3 main reasons for this:

  1. If there is ever an issue with Symantec PGP disks (a ‘back door’), each file is still encrypted individually;
  2. If the PGP disk is left mounted and physical access is gained, the attacker must still know another password; and
  3. If a virus attempts to edit file content while the drive is mounted, it would need to decrypt each file first (it would probably just corrupt the data honestly, but still!)

…oh, and it is really simple to maintain, and transfer between drives!


My New Backup Policy

It’s been a while since I shared why backups and I do not mesh, and I even went as far as asking cloud services to encrypt my data for me. Today (well, it’s actually 1am, so this morning) I am going to share a slightly improved version of my backup methods.

This time, I have built some redundancy into my backup regime and made it more robust. To summarise the past: I have used applications such as Cobian and EaseUS; Cobian’s VSS service kept failing me, and EaseUS was corrupting the data when writing, so I had 2 copies of corrupt data and was unaware for months. With my newest solution I aim to tackle this with the following:

  1. 4 jobs to individually copy files (explained later);
  2. 1 job to copy the entire 4 backup files to a NAS and;
  3. 1 job to copy the local files to another NAS drive.

With these jobs, the end files are written as native .zip archives, and an email notification with the job status is sent out to confirm the outcome. Of course, this isn’t foolproof, but I am hoping that the 3 duplicate copies of the files and the email notifications are enough to save me; heck, the program also checks the integrity of the files! So, I am going to detail this for you wonderful people.
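iPerius handles the zipping and integrity checking itself; for the curious, the concept is easy to demonstrate with Python’s standard zipfile module (a toy stand-in, not anything iPerius actually runs):

```python
import zipfile

# Write a small "backup" archive, compressed like the real jobs' .zip output.
with zipfile.ZipFile("backup.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("vault/notes.txt", "important data")

# Integrity check: testzip() re-reads every member and verifies its CRC,
# returning the first corrupt filename, or None if the archive is clean.
with zipfile.ZipFile("backup.zip") as zf:
    bad = zf.testzip()

print("archive OK" if bad is None else f"corrupt member: {bad}")
```

This is the same idea as the job-level check: write the archive, then re-read every member and compare checksums before trusting the copy.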

Just warning you – this is going to be a lot of “this is the button I clicked, this is the screen” options below, but that’s life.

Having an escape medium

Before beginning, I needed a solution that would get me out of trouble should this PC die. The obvious choice for holding upwards of 40GB is a NAS, or Network Attached Storage. Utilising an independent medium allows me to segregate damage to my PC from the NAS (viruses, power surges and water damage being the main causes).

The initial copy job should be local (as in, an internal drive) for the following reasons:

  1. You’re less likely to have corrupt backups locally, as opposed to files generated over the network;
  2. If the external media dies, you should have another copy of the data, and;
  3. You’ve got immediate access for recovery should you require it.

So with that, I re-purposed my Orico 2 Bay NAS (which has been awesome, if you’re after a cheap NAS bay- this does not have full “NAS” functionality) to be the medium for receiving the data. Because I encrypt my files, I needed to have a copy of AxCrypt and Symantec PGP Encryption in each directory I planned on using the backups in. This was my “escape medium”.

Under 4 directories (2x NAS drives, local SSD and a recovery USB) I housed the installers for AxCrypt and Symantec PGP. Note that because my backup solution uses .zip, I do not need any special software to extract it.

Backup Software

In this example (and after over 25 hours of research) I have come to the conclusion that iPerius is one of the best tools out there for data backup. I enjoyed this software so much that I paid the $315.00 USD figure to unlock all features (and “support development”).

I will go through an example (step by step, with pictures) to show how I implemented my backup policies.

Selecting Directories

The first process for creating your backup files is to set your “working” directories; the folders you wish to backup. Because I use Symantec PGP Disks, I want to only backup the .pgd files, and not the entire directory.

With iPerius, I would apply the filter to only include the PGD file extension, as follows:


However, I also wanted one job per PGD file, so I added the other 3 PGD files to the exclusion list, as follows:







Why is this important?

The reason this is so important is that it demonstrates iPerius’s ability to not only include all files of a given extension, but also to exclude the specific files you do not want.

Destination Information

The next step is to state where you want the data stored. In this screen there are 3 important options I wanted to identify:

  1. How many copies of the same file I wanted this job to keep;
  2. Whether I wanted the files compressed, and with a password; and
  3. What I wanted the file to be named.

As you can see, I want to keep ~10 copies (full, not incremental) of the backup, and I have named the file based on the following parameters:


Logging The Data

Next, we want to make it autonomous; if you have to initiate the backup yourself, you’ll probably forget.

I went and set my days, and time, and asked the program to create a log file per backup:


Then, I configured the program to send me an email upon completion of each job:

Email Settings

The settings for the email portion.

I want to pay special attention to the email it sends you. Not only can you customize the header, recipients and body, but the format is easy to understand, and all this is wrapped up in a nicely viewed status bar, to tell you where the job is at:


The encryption portion of the data

In the past I’ve explained why it’s important to secure personal privacy, and demonstrated how basic encryption works, but never how I personally achieve this. A few years ago I purchased 15 copies of a PGP program for a customer who never went through with the deal – so now I have spare copies!

Symantec PGP Disks are how I store (and encrypt) data. Similar to this paper on Whole Disk Encryption, my pretty good privacy application has a screen where it lists PGP disks I have created:


These files can be located in Explorer, as such:


To access the files (mounting them into memory) you simply need to enter the passphrase to the disk, with the following pop-up:


The reason why I use this tool? It’s reliable, and rather secure – if my understanding on how PGP works is accurate, anyway.


So there you have it, a simple backup policy, that has enough redundancy to keep me out of trouble! Now if only Gmail labels were useful… 😉

Bash, making things easier.

Following on from my post last night about WGet and YouTube-DL, we’ve learned how to enable Bash on Windows 10. This is an extremely useful thing to do, because it empowers you to use commands that are not native to Windows (or, as a Linux fanboy would say, M$).

So, just so you all get a better understanding of the improved functionality of bash, we’re going to make some comparisons and examples.

I’m not going into detail – there is far too much to cover.

Network Monitoring – netstat.

On a Windows box, to see current usage per process, the easiest method is to run:

netstat -a -b

Which will return a string similar to this:


Rather simple to do, and allows you to see what process is responsible for what traffic, and what protocol it is using.

GNU/Linux supports netstat too, but with a completely different syntax. To see connections along with the owning process, simply run the following:

root@DESKTOP-3O8E0L8:/home/nanky# netstat -p

Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name


It returns slightly more intuitive data (in my opinion). We can also view per-interface statistics by adding -i to the command:

netstat -i

This is an advantage over Windows. But let us get to the killer feature:

netstat -a -v -W

The following flags are used:

 -a, --all
 Show both listening and non-listening sockets. With the --interfaces option, show interfaces that are not up

--verbose , -v
 Tell the user what is going on by being verbose. Especially print some useful information about unconfigured address families.

--wide , -W
 Do not truncate IP addresses by using output as wide as needed. This is optional for now to not break existing scripts.

Thus, this command can return more valuable information, depending on the situation. However, there are better tools: bmon and nethogs.


Start by issuing the following command:

sudo apt-get install bmon


Once installed, you should always look at the man page:

man bmon


Using bmon allows you to view the usage and statistics per interface, such as:

BMON Capture

Of course, there are other tools to conquer these tasks out there – I would strongly suggest you read this post outlining other sysadm tools available to you.


Automation, crontabs.

As opposed to the clunky Windows Task Scheduler, Linux uses cron jobs to execute scheduled tasks.

You’ll need to have bash running for CronJobs to work on Windows.

Pretty self-explanatory: create a script or command you want to execute, and add it to the scheduler. Here is the default example provided to you:

# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/

You guessed it, the stars represent the schedule (minute, hour, day of month, month, day of week):

# m h dom mon dow command

The tar -zcf portion is the command it executes. Pretty self-explanatory. For example, to run a script at 7:30am every Monday:

crontab -e
30 7 * * 1 /my/command/to/execute/yo.sh

You get this. Easy stuff.


Task Maintenance – task.

Okay, so this one’s not so critical; I just love this application: TaskWarrior. Task is a super simple yet super powerful CLI-driven task manager.

sudo apt-get install task


There we go, you’ve installed it. Let’s add our first task:

nanky@DESKTOP-3O8E0L8:~$ task add priority:H due:31 project:personal edit this css

Now let’s view our task:

 ID Age P Due Description Urg
 2  26s H  4w edit this css

Pretty simple method to view the task at hand. Now we want to view the details of the task with ID 2:

task 2 info

Which will return the following:

Name Value
ID 2
Description change task 1
Status Pending
Entered 2017-09-27 23:50:56 (1min)
Last modified 2017-09-27 23:50:56 (1min)
UUID 34c4cf80-a857-4123-a463-4c4bcc44b591
Urgency 6
Priority H

UDA priority.H 1 * 6 = 6

You can sync your tasks across multiple devices, too! Just view their usage examples, and you’ll get the feel for how complex you can make the tool.

Lastly, text editing.

I cannot live without GNU Nano. Yes, you could use Vim, but the simplicity of Nano amazes me.

For example, let’s edit a file and close it, all without needing to locate it in a file manager, open it, manually save and confirm dialogs:

nano /mnt/c/path/to/file/yo.txt

It is literally that simple, and you can interact with files stored on Windows natively.

That’s it.

You pretty much get the picture; CLI > GUI.


Just read:

  1. 20 Command Line Tools to Monitor Linux Performance
  2. Best Linux Command-Line Tools For Network Engineers
  3. Top 5 Linux Utilities for Network Engineers






Downloading Web Content with WGet and YouTube-DL

In this post I am going to cover the process behind using WGet and YouTube-DL to obtain media from hosted websites.

But Michael, isn’t this illegal?

Depends, did you read this? I’m simply showing you the methodology behind something – it’s your choice how to use this.

Basic Install…for Linux.

YouTube-DL and WGet are native to Linux; using the package manager, you can simply run the following:

sudo apt-get install wget
sudo apt-get install youtube-dl

Installing this on a Windows Client.

But for all of us unfortunate users stuck on Windows, how do we achieve this? There are two main methods, both of which I will demonstrate.

Enabling Bash for Windows 10

If you’re using Windows 10, you can enable the “Windows Subsystem for Linux”. It’s a real hard process: paste the following into an administrative PowerShell console:

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux

…and reboot.

Once you load bash (literally, bash.exe) you can install the clients in the way mentioned above.

Getting the stand alone programs to run on Windows

Not on Windows 10? Cannot really blame you. So, let’s go and manually get these packages.

First, download wget and put it in a working directory.

Same process for YouTube-DL.

“Using the awesomeness that is, these things”

It’s 12am, and I’ve had 2 hours sleep. The titles don’t matter right now.

Let’s open an administrative PowerShell, and change to the directory.

Directory: C:\bin

Mode            Last Write Time       Length  Name
----            ---------------       ------  ----
-a----     27/09/2017 12:27 AM      3481920  wget.exe
-a----     27/09/2017 12:28 AM      7803406  youtube-dl.exe

…and now you look at the help file and figure out how to use the programs yourselves…? No? Okay.

Let’s start with YouTube-DL, and their documentation. This will give you all the switches you can use in conjunction with the program.

For those who are just lazy, save the following as a PowerShell script and execute it to perform a basic download. I did no error checking and made no improvements to this:

$YP = "Enter your working directory"
$Vid = "Enter URL to vid"

function getfiles {
    # Download wget and youtube-dl into the working directory
    $URL1 = "https://eternallybored.org/misc/wget/current/wget.exe"
    $URL2 = "https://yt-dl.org/downloads/2017.09.24/youtube-dl.exe"
    Start-BitsTransfer -Source $URL1 -Destination $YP
    Start-BitsTransfer -Source $URL2 -Destination $YP
}

function downloadmystuff {
    Set-Location $YP
    .\youtube-dl.exe $Vid --ignore-errors --geo-bypass --yes-playlist --write-description `
        --write-all-thumbnails --console-title --print-traffic --all-formats
}

getfiles
downloadmystuff

Basically, this will download the file referenced by your $Vid variable, with the following parameters:

  • --ignore-errors
  • --geo-bypass
  • --yes-playlist
  • --write-description
  • --write-all-thumbnails
  • --console-title
  • --print-traffic
  • --all-formats

Pretty straightforward, easy to understand. Oh, and did you know they support Instagram?


Cool. So let’s use this in conjunction with WGet. I want to download my home page:

nanky@DESKTOP-3O8E0L8:~$ cd /mnt/c/bin/ && wget www.michaelnancarrow.com

I want to download just the images from here:

nanky@DESKTOP-3O8E0L8:/mnt/c/bin$ wget -nd -E -H -k -K -p -A jpeg,png,jpg https://imgur.com/gallery/PATH

  • -nd – don’t create a local directory hierarchy
  • -E – adjust extensions so HTML pages are saved as .html
  • -H – span hosts (follow links to other domains)
  • -k and -K – convert links for local viewing, keeping a backup of each original
  • -p – download all page requisites (images, CSS), and
  • -A jpeg,png,jpg – only accept files with these extensions

The full details can all be found on the manual page.

So there you go, another very basic “how to” document that could have been answered more succinctly by spending 5 minutes on Google. Literally.

Let’s chat about how you chat.

In the modern day, there is a plethora of instant-messaging applications at your disposal to communicate, send photos and videos, and share your location with family and friends. The same is true of the Government housing all this data. So, I thought I’d add some context as to what you can do to improve security when using instant-messaging applications.

The best thing about factual topics on the internet is that you can quote them to reiterate the same facts in your own post, and it’s for a good cause: keeping things factual.

Why do I need an encrypted chat program?

You do not necessarily need an encrypted chat program; however, it is important to those who wish to have privacy in their conversations. Any message (text), photo, video or voice recording transmitted via Facebook, Snapchat, Instagram or even conventional SMS can be (and most likely is) stored in some centralised database, building a profile of you.

Do you remember people complaining that Facebook knows what they look at on their phones?  As an example, this post goes into detail about the data you are sharing with Facebook, even though you are not directly opting to do so (read the terms of service next time):

  • Videos you’ve watched
  • Comments you’ve liked
  • Websites you’ve visited
  • Articles and websites you’ve commented on
  • Surveys you’ve filled out
  • Companies you like
  • People you’ve been tagged with
  • People you frequently hang out with
  • Friends you’ve requested
  • Friends you denied
  • Friends you’ve un-friended
  • How often you are online
  • Apps you Admin/created
  • Pages you admin/created
  • Your current mood
  • Device you’ve accessed the Internet from
  • Exact Geo-location (longitude, altitude, latitude, time/date stamp)
  • TV, Film, Concert you are currently watching
  • Book or publication you are currently reading
  • Audio you are currently listening too
  • Drink you are currently drinking
  • Food you are currently eating
  • Activities you participate in
  • Advertising you interact with
  • Profiles you interact with most
  • Locations you access Facebook
  • Locations you access web properties connected to Facebook
  • Level of online engagement
  • When you changed jobs
  • How long you stayed in a job
  • Credit card details
  • IP Address
  • Apps you’ve downloaded
  • Games you’ve played
  • Pages/Businesses you’ve un-liked (when)

The main reasoning behind the highlighted items is that, in unison, anyone with access to this data can isolate your location, recent visits and what devices you have on you. Not only does this potentially make you vulnerable to tracking, but it opens you up to identity theft (yes, these are very dramatic repercussions, but still valid).

  • Device you’ve accessed the Internet from

Services (this is not limited to Facebook) should not have the ability to catalogue the devices tied to an account.

With this ability, they (leading to the next point) always have a geo-located position of the device, and by assumption the person, at their disposal.

  • Exact Geo-location (longitude, altitude, latitude, time/date stamp)

Exact. What the actual Foxtrot Unicorn Charlie Kite?  Services knowing where I am at any given time is a clear abuse of power.

Using this information, patterns of travel and location can allow you to be tracked. Using services such as Facebook to allow geo-location tracking is absurd in my opinion.

  • IP Address

If someone knows my public IP behind a NAT, they can easily sniff and track the web usage of specific people; again, what the actual Foxtrot Unicorn Charlie Kite?

  • Websites you’ve visited

Because Internet censorship is what everyone wants, right?  Having ISPs log metadata, and services knowing which websites you visit, is a sheer breach of personal privacy.

  • Book or publication you are currently reading

Personally, I read a lot of politically incorrect publications (not that I am some crazy person) and I’d like to keep that private; Facebook recording that I am “Googling” and researching terrorist attacks should not happen without my explicit consent.

Now of course, it is nearly impossible not to use services that record all this data (personally, Google knows everything about me), but there are valid techniques to mitigate the collection and aggregation of this information.

What are the things I should look for in such a program?

This is a broad and rather opinion-based topic. Discussions about open versus closed source, cross-platform support and protocols are all open for debate.

Personally (and I am not a security expert), there are 3 things an application needs before I consider it secure:

  • Open source, peer reviewed;
  • De-facto encryption standards and;
  • Non identifying sign-up requirements.

Open source, peer reviewed

This topic attracts a lot of scrutiny. One argument is that vulnerabilities are visible to anyone reading the code; at the same time, a greater set of eyes allows them to be patched faster.

My opinion is summarised by this statement:

Do I choose Safe Number One, that’s advertised to have half-inch steel walls, an inch-thick door, six locking bolts, and is tested by an independent agency to confirm that the contents will survive for two hours in a fire? Or do I choose Safe Number Two, a safe the vendor simply says to trust, because the design details of the safe are a trade secret? It could be that Safe Number Two is made of plywood and thin sheet metal. Or, it could be that it is stronger than Safe Number One, but the point is I have no idea.

Of course, this battle has raged across the internet, and everyone has their own opinion. I personally believe that if the code is peer-reviewed, back doors and holes in the software are far less likely than with a proprietary company that will share your data for the right payment.

De-facto encryption standards

When you use a program designed with security in mind, you do not want to rely on some newly-created protocol that takes a thousand builds to become stable. Implementing either old (usually broken) or unstable protocols is a flaw in itself when trying to provide secure messaging.

As with all my posts, the technical content will be highly referenced as I am (as I’ve said) not a security expert.

When looking at encryption and cryptography, there are a number of standards you can enact.

Extensible Messaging and Presence Protocol (XMPP)

is a communications protocol for message-oriented middleware based on XML (Extensible Markup Language). It enables the near-real-time exchange of structured yet extensible data between any two or more network entities. Originally named Jabber, the protocol was developed by the Jabber open-source community in 1999 for near-real-time instant messaging (IM), presence information, and contact list maintenance. Designed to be extensible, the protocol has also been used for publish–subscribe systems, signalling for VoIP, video, file transfer, gaming, Internet of Things (IoT) applications such as the smart grid, and social networking services.

Think of XMPP as the back-end transport method, not necessarily the encryption methodology.  XMPP is good because it is an open standard and scales across platforms.

According to XMPP:

Secure — any XMPP server may be isolated from the public network (e.g., on a company intranet) and robust security using SASL and TLS has been built into the core XMPP specifications. In addition, the XMPP developer community is actively working on end-to-end encryption to raise the security bar even further.

An XMPP Server is considered secure when the following (minimum) items are present:

  • The server is running with a server certificate
  • The server is configured to not allow any cleartext communications – S2S and C2S
  • The server supports XEP-198

Note that unless you have clear access to the code running on the server to validate the above, you should assume the XMPP portion of the application is insecure.

Off-the-Record Messaging (OTR)

is a cryptographic protocol that provides encryption for instant messaging conversations. OTR uses a combination of the AES symmetric-key algorithm with 128-bit key length, the Diffie–Hellman key exchange with 1536-bit group size, and the SHA-1 hash function. In addition to authentication and encryption, OTR provides forward secrecy and malleable encryption.

OTR is a rather complex protocol. Before commencing an encrypted data exchange, both parties must perform an unauthenticated Diffie–Hellman (D-H) key exchange to set up an encrypted channel, and then perform mutual authentication inside that channel.

Let’s use Bob and Alice as the example here. Bob must initiate the AKE (Authenticated Key Exchange) as follows:

Bob:

  1. Picks a random value r (128 bits)
  2. Picks a random value x (at least 320 bits)
  3. Sends Alice AES_r(g^x), HASH(g^x)

Alice:

  1. Picks a random value y (at least 320 bits)
  2. Sends Bob g^y

Bob:

  1. Verifies that Alice’s g^y is a legal value (2 <= g^y <= modulus-2)
  2. Computes s = (g^y)^x
  3. Computes two AES keys c, c′ and four MAC keys m1, m1′, m2, m2′ by hashing s in various ways
  4. Picks keyid_B, a serial number for his D-H key g^x
  5. Computes M_B = MAC_m1(g^x, g^y, pub_B, keyid_B)
  6. Computes X_B = pub_B, keyid_B, sig_B(M_B)
  7. Sends Alice r, AES_c(X_B), MAC_m2(AES_c(X_B))

Alice:

  1. Uses r to decrypt the value of g^x sent earlier
  2. Verifies that HASH(g^x) matches the value sent earlier
  3. Verifies that Bob’s g^x is a legal value (2 <= g^x <= modulus-2)
  4. Computes s = (g^x)^y (note that this will be the same as the value of s Bob calculated)
  5. Computes two AES keys c, c′ and four MAC keys m1, m1′, m2, m2′ by hashing s in various ways (the same as Bob)
  6. Uses m2 to verify MAC_m2(AES_c(X_B))
  7. Uses c to decrypt AES_c(X_B) to obtain X_B = pub_B, keyid_B, sig_B(M_B)
  8. Computes M_B = MAC_m1(g^x, g^y, pub_B, keyid_B)
  9. Uses pub_B to verify sig_B(M_B)
  10. Picks keyid_A, a serial number for her D-H key g^y
  11. Computes M_A = MAC_m1′(g^y, g^x, pub_A, keyid_A)
  12. Computes X_A = pub_A, keyid_A, sig_A(M_A)
  13. Sends Bob AES_c′(X_A), MAC_m2′(AES_c′(X_A))

Bob:

  1. Uses m2′ to verify MAC_m2′(AES_c′(X_A))
  2. Uses c′ to decrypt AES_c′(X_A) to obtain X_A = pub_A, keyid_A, sig_A(M_A)
  3. Computes M_A = MAC_m1′(g^y, g^x, pub_A, keyid_A)
  4. Uses pub_A to verify sig_A(M_A)

If all of the verifications succeeded, Alice and Bob now know each other’s Diffie-Hellman public keys, and share the value s. Alice is assured that s is known by someone with access to the private key corresponding to pubB, and similarly for Bob.
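The heavy lifting in the AKE is just modular exponentiation plus hashing. The following Python sketch shows the core of it: both sides arrive at the same s and derive matching keys from it. The parameters are toy values (OTR really uses the 1536-bit MODP group), SHA-256 under distinct tags stands in for the spec's key-derivation hashes, and the long-term signatures are omitted.

```python
import hashlib
import hmac
import secrets

# Toy parameters -- far too small for real use; illustration only.
p = 2**127 - 1   # a (Mersenne) prime
g = 5

def h(tag: bytes, n: int) -> bytes:
    """Hash an integer under a one-byte tag (stand-in for OTR's KDF)."""
    return hashlib.sha256(tag + n.to_bytes(16, "big")).digest()

# Bob picks x, Alice picks y; they exchange g^x and g^y.
x = secrets.randbelow(p - 2) + 1
y = secrets.randbelow(p - 2) + 1
gx, gy = pow(g, x, p), pow(g, y, p)

# Each side computes s from the other's public value and its own secret.
s_bob = pow(gy, x, p)
s_alice = pow(gx, y, p)

# "Computes two AES keys c, c' and four MAC keys ... by hashing s in
# various ways" -- here, by hashing s under distinct tags.
def derive(s: int) -> dict:
    return {"c": h(b"\x01", s), "m1": h(b"\x02", s), "m2": h(b"\x03", s)}

bob, alice = derive(s_bob), derive(s_alice)

# Bob MACs the DH values under m1; Alice verifies with her copy of m1.
payload = gx.to_bytes(16, "big") + gy.to_bytes(16, "big")
mb = hmac.new(bob["m1"], payload, hashlib.sha256).digest()
ok = hmac.compare_digest(
    mb, hmac.new(alice["m1"], payload, hashlib.sha256).digest())
```

In the real protocol Bob additionally signs M_B with his long-term key, which is what upgrades this from a plain D-H exchange to an authenticated one.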

Once the AKE has completed, the two parties can go about exchanging data.

Suppose Alice has a message (msg) to send to Bob:


  • Picks the most recent of her own D-H encryption keys that Bob has acknowledged receiving (by using it in a Data Message, or failing that, in the AKE). Let keyA be that key, and let keyidA be its serial number.
  • If the above key is Alice’s most recent key, she generates a new D-H key (next_dh), to get the serial number keyidA+1.
  • Picks the most recent of Bob’s D-H encryption keys that she has received from him (either in a Data Message or in the AKE). Let keyB be that key, and let keyidB be its serial number.
  • Uses Diffie-Hellman to compute a shared secret from the two keys keyA and keyB, and generates the sending AES key, ek, and the sending MAC key, mk, as detailed below.
  • Collects any old MAC keys that were used in previous messages, but will never again be used (because their associated D-H keys are no longer the most recent ones) into a list, oldmackeys.
  • Picks a value of the counter, ctr, so that the triple (keyA, keyB, ctr) is never the same for more than one Data Message Alice sends to Bob.
  • Computes T_A = (keyidA, keyidB, next_dh, ctr, AES-CTR_ek,ctr(msg))
  • Sends Bob T_A, MAC_mk(T_A), oldmackeys

Bob then:

  • Uses Diffie-Hellman to compute a shared secret from the two keys labelled by keyidA and keyidB, and generates the receiving AES key, ek, and the receiving MAC key, mk, as detailed below. (These will be the same as the keys Alice generated, above.)
  • Uses mk to verify MAC_mk(T_A).
  • Uses ek and ctr to decrypt AES-CTR_ek,ctr(msg).
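A rough sketch of that sending/receiving flow, assuming the shared secret is already in hand. SHA-256 in counter mode stands in for AES-CTR (AES is not in the Python standard library), and the names here are mine, not OTR's wire format:

```python
import hashlib
import hmac

def keystream(key: bytes, ctr: int, length: int) -> bytes:
    """SHA-256 in counter mode, standing in for AES-CTR."""
    out = b""
    block = 0
    while len(out) < length:
        out += hashlib.sha256(
            key + ctr.to_bytes(8, "big") + block.to_bytes(8, "big")
        ).digest()
        block += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

shared = b"shared-secret-from-D-H"             # placeholder for the D-H output
ek = hashlib.sha256(b"enc" + shared).digest()  # sending AES key, ek
mk = hashlib.sha256(b"mac" + shared).digest()  # sending MAC key, mk
ctr, msg = 1, b"hi Bob"

# Alice: T_A carries the encrypted message; the MAC covers T_A.
t_a = xor(msg, keystream(ek, ctr, len(msg)))
tag = hmac.new(mk, t_a, hashlib.sha256).digest()

# Bob: derives the same ek/mk, verifies the MAC first, then decrypts.
assert hmac.compare_digest(tag, hmac.new(mk, t_a, hashlib.sha256).digest())
recovered = xor(t_a, keystream(ek, ctr, len(msg)))
```

Note the order on the receiving side: verify the MAC before touching the ciphertext, exactly as the bullet list has Bob do.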

Do you like advanced mathematics?

So if it’s just a pre-defined function, surely it can be impersonated?

Socialist Millionaires’ Protocol (SMP)

is one in which two millionaires want to determine whether their wealth is equal, without disclosing any information about their riches to each other. It is a variant of the Millionaires’ Problem, whereby two millionaires wish to compare their riches to determine who has the most wealth.

Basically, let’s check to see who you are without disclosing information. Another fun example of maths. Assuming that Alice begins the exchange:


Alice:

  1. Picks random exponents a2 and a3
  2. Sends Bob g2a = g1^a2 and g3a = g1^a3

Bob:

  1. Picks random exponents b2 and b3
  2. Computes g2b = g1^b2 and g3b = g1^b3
  3. Computes g2 = g2a^b2 and g3 = g3a^b3
  4. Picks random exponent r
  5. Computes Pb = g3^r and Qb = g1^r g2^y
  6. Sends Alice g2b, g3b, Pb and Qb

Alice:

  1. Computes g2 = g2b^a2 and g3 = g3b^a3
  2. Picks random exponent s
  3. Computes Pa = g3^s and Qa = g1^s g2^x
  4. Computes Ra = (Qa / Qb)^a3
  5. Sends Bob Pa, Qa and Ra

Bob:

  1. Computes Rb = (Qa / Qb)^b3
  2. Computes Rab = Ra^b3
  3. Checks whether Rab == (Pa / Pb)
  4. Sends Alice Rb

Alice:

  1. Computes Rab = Rb^a3
  2. Checks whether Rab == (Pa / Pb)
If everything is done correctly, then Rab should hold the value of (Pa / Pb) times (g2^(a3 b3))^(x – y), which means that the test at the end of the protocol will only succeed if x == y. Further, since g2^(a3 b3) is a random number not known to any party, if x is not equal to y, no other information is revealed.
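The whole SMP exchange is small enough to run end to end. This sketch uses a toy prime and generator (the real protocol uses the 1536-bit MODP group) and plays both roles inside one function; "division" mod p is multiplication by the modular inverse:

```python
import secrets

p = 2**127 - 1   # toy prime modulus -- illustration only
g1 = 5           # toy generator

def rand() -> int:
    return secrets.randbelow(p - 2) + 1

def inv(a: int) -> int:
    return pow(a, -1, p)   # modular inverse, i.e. "division" mod p

def smp(x: int, y: int) -> bool:
    """Run the SMP steps above; True iff Alice's x equals Bob's y."""
    # Alice
    a2, a3 = rand(), rand()
    g2a, g3a = pow(g1, a2, p), pow(g1, a3, p)
    # Bob
    b2, b3 = rand(), rand()
    g2b, g3b = pow(g1, b2, p), pow(g1, b3, p)
    g2, g3 = pow(g2a, b2, p), pow(g3a, b3, p)  # Alice derives the same
    r = rand()                                 # values from g2b, g3b
    Pb = pow(g3, r, p)
    Qb = pow(g1, r, p) * pow(g2, y, p) % p
    # Alice
    s = rand()
    Pa = pow(g3, s, p)
    Qa = pow(g1, s, p) * pow(g2, x, p) % p
    Ra = pow(Qa * inv(Qb) % p, a3, p)
    # Bob
    Rab = pow(Ra, b3, p)
    return Rab == Pa * inv(Pb) % p
```

With matching secrets the final check passes; with different secrets the unknown factor (g2^(a3 b3))^(x – y) makes it fail, except with negligible probability.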

Pretty neat documentation on OTR. Props to them.

Diffie–Hellman key exchange (D–H) – Elaborated for OTR Implementation.

is a method of securely exchanging cryptographic keys over a public channel.

Or as I prefer to explain it:

Diffie–Hellman is a mathematical algorithm to exchange a shared secret between two parties. This shared secret can be used to encrypt messages between those two parties.

This “methodology” (it’s a protocol) is used to derive a shared key without ever transmitting it. The following illustrates how it works:

The simplest and the original implementation of the protocol uses the multiplicative group of integers modulo p, where p is prime, and g is a primitive root modulo p. These two values are chosen in this way to ensure that the resulting shared secret can take on any value from 1 to p–1. Here is an example of the protocol (the secret values are a, b and s):

  1. Alice and Bob agree to use a modulus p = 23 and base g = 5 (which is a primitive root modulo 23).
  2. Alice chooses a secret integer a = 6, then sends Bob A = g^a mod p
    • A = 5^6 mod 23 = 8
  3. Bob chooses a secret integer b = 15, then sends Alice B = g^b mod p
    • B = 5^15 mod 23 = 19
  4. Alice computes s = B^a mod p
    • s = 19^6 mod 23 = 2
  5. Bob computes s = A^b mod p
    • s = 8^15 mod 23 = 2
  6. Alice and Bob now share a secret (the number 2).

Both Alice and Bob have arrived at the same value s, because, under mod p,

A^b mod p = g^(ab) mod p = g^(ba) mod p = B^a mod p

More specifically,

(g^a mod p)^b mod p = (g^b mod p)^a mod p

The following values are then stated:

  • g = public (prime) base, known to Alice, Bob, and Eve. g = 5
  • p = public (prime) modulus, known to Alice, Bob, and Eve. p = 23
  • a = Alice’s private key, known only to Alice. a = 6
  • b = Bob’s private key, known only to Bob. b = 15
  • A = Alice’s public key, known to Alice, Bob, and Eve. A = g^a mod p = 8
  • B = Bob’s public key, known to Alice, Bob, and Eve. B = g^b mod p = 19
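The worked example above can be checked in a few lines of Python (pow with three arguments performs modular exponentiation):

```python
p, g = 23, 5       # public modulus and base
a, b = 6, 15       # Alice's and Bob's secret integers

A = pow(g, a, p)   # Alice sends A = g^a mod p  -> 8
B = pow(g, b, p)   # Bob sends B = g^b mod p    -> 19

s_alice = pow(B, a, p)   # Alice: s = B^a mod p
s_bob = pow(A, b, p)     # Bob:   s = A^b mod p
print(A, B, s_alice, s_bob)   # 8 19 2 2
```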


This is literally all available on Wikipedia by the way.

It is imperative to understand that Diffie-Hellman is just a function to compute a shared key, not a full protocol. To actually use it, you need to design a protocol on top of it; OTR actually signs the DH key with its “long term key”.

Perfect Forward Secrecy (PFS)

Again, this is bundled in the OTR implementation. In the simplest form:

PFS is a property of secure communication protocols in which compromise of long-term keys does not compromise past session keys. Forward secrecy protects past sessions against future compromises of secret keys or passwords. If forward secrecy is used, encrypted communications and sessions recorded in the past cannot be retrieved and decrypted should long-term secret keys or passwords be compromised in the future, even if the adversary actively interfered.

To get a better understanding of this, it can be stated that:

A public-key system has the property of forward secrecy if it generates one random secret key per session to complete a key agreement, without using a deterministic algorithm. This means that the compromise of one message cannot compromise others as well, and there is no one secret value whose acquisition would compromise multiple messages.

There are many iterations of this; the most notable current method is the Double Ratchet.
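The idea can be sketched with a one-way symmetric ratchet, a simplified cousin of the Double Ratchet (which also mixes in fresh Diffie–Hellman outputs). Each message gets its own key; once the previous chain key is deleted, past message keys cannot be recomputed:

```python
import hashlib

def ratchet(chain_key: bytes) -> tuple:
    """Derive a one-time message key and advance the chain (one-way)."""
    message_key = hashlib.sha256(b"msg" + chain_key).digest()
    next_chain = hashlib.sha256(b"chain" + chain_key).digest()
    return message_key, next_chain

chain = hashlib.sha256(b"initial shared secret").digest()
message_keys = []
for _ in range(3):
    mk, chain = ratchet(chain)   # encrypt one message with mk, then discard
    message_keys.append(mk)
```

Because the hash is one-way, compromising today's chain key reveals nothing about the keys that protected yesterday's messages.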

Basic Understanding = Done.

Now that you’ve got a basic understanding of why it is important to want to encrypt your data, and an example of how this is accomplished, let’s look at your options.

Again this post is about picking an app to secure messaging. It is not intended to go into depth on how encryption works etc.

So, how do we know what to use, and what not to use? Well you could use CryptoCat or an application listed here.

The answer is: whatever application meets your requirements. There is no single correct answer to this question.

Personally I use Signal for a few reasons:

  1. It’s easy to tell people to install;
  2. It implements the Double Ratchet (an evolution of the OTR ratchet);
  3. It uses Curve25519, an elliptic-curve improvement to D-H; and
  4. It’s just really easy to use.


Make sure you check out my blog post about Fighting For Internet Freedom.

Come visit me on Stack Exchange:

Profile for Michael Nancarrow on Stack Exchange


Errors? Typos? More facts needed?

EncFS; easy, fast and reliable?

Implementing a secure file system is essential in current-day computing, especially with ransomware (“crypto”) attacks on the rise. My personal method of protecting data on a Linux box is EncFS (you may prefer GEncFSM).

EncFS is a Free (LGPL), FUSE-based cryptographic filesystem. It transparently encrypts files, using an arbitrary directory as storage for the encrypted files.

EncFS uses a pair of directories: one encrypted and one unencrypted. For example, my Dropbox directory could act as the encrypted mirror of a plaintext directory in my /home.


Default EncFS Screen

Any data stored in your unencrypted directory is encrypted, using your defined passphrase, into the mirrored directory.
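The mirror-directory idea can be illustrated with a toy Python sketch. This is not EncFS's actual algorithm (EncFS encrypts file names reversibly with a real cipher; here a keyed hash and an XOR keystream stand in), just the concept of deriving everything from the passphrase:

```python
import hashlib
import hmac

def derive_key(passphrase: str, salt: bytes) -> bytes:
    # Stretch the passphrase into a key (EncFS's volume-key handling differs).
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)

def encrypt_name(key: bytes, name: str) -> str:
    # Keyed hash of the filename -- EncFS uses a reversible cipher instead.
    return hmac.new(key, name.encode(), hashlib.sha256).hexdigest()[:22]

def transform(key: bytes, data: bytes) -> bytes:
    # XOR with a key-derived keystream; applying it twice restores the data.
    stream = b""
    block = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        block += 1
    return bytes(a ^ b for a, b in zip(data, stream))

key = derive_key("correct horse battery staple", salt=b"\x00" * 16)
mirror_name = encrypt_name(key, "notes.txt")          # name in the mirror dir
ciphertext = transform(key, b"my secret notes")       # content in the mirror
plaintext = transform(key, ciphertext)                # mounting decrypts it
```

Without the passphrase, neither the mirrored file name nor its contents reveal anything useful, which is exactly the property the screenshots below demonstrate.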

Installation of EncFS

Whilst you can download the GitHub project and follow the installation guide, if you are on Ubuntu or another similar flavour (Kubuntu or Lubuntu as an example) you can simply run the following command:

sudo apt-get -y install encfs

If you prefer GEncFSM, then run the following:

sudo add-apt-repository ppa:gencfsm/ppa
sudo apt-get update
sudo apt-get install gnome-encfs-manager

Usage of EncFS

If you are intending to use EncFS as the command-line option (I usually just default to the UI) then I would suggest inspecting the man page:

 encfs - mounts or creates an encrypted virtual filesystem

 encfs [--version] [-s] [-f] [-v|--verbose] [-i MINUTES|--idle=MINUTES]
 [--extpass=program] [-S|--stdinpass] [--anykey] [--forcedecode]
 [-d|--fuse-debug] [--public] [--no-default-flags] [--ondemand]
 [--delaymount] [--reverse] [--standard] [-o FUSE_OPTION] rootdir
 mountPoint [-- [Fuse Mount Options]]

If you are not too particular with how you want to configure the system, go ahead and perform:

mkdir -p ~/encrypted
mkdir -p ~/decrypted

Then mount them for EncFS (you can later see where they mount using the mount command):

encfs ~/encrypted ~/decrypted

You will be prompted to select the mode, and to create a password for the encrypted paths.

Usage of GEncFSM

Using the GUI is probably a lot more manageable here. To create a stash, simply select the plus icon, configure your path and enter a password:


Creating New Stash


Then go ahead and mount the stash:


Mounting Stash

Understanding EncFS

When a file is created in the directory “Private” (in our case this is the “un-encrypted” path), a mirror file is created in your “.Private” directory, with both the file name and its contents encrypted under a volume key that is protected by your passphrase:


Private and .Private

Therefore, if we attempt to look at the encrypted file, it would not present any readable data:


File Value

Of course, if we read the .encfs6.xml file, we will see the encodedKeyData value:


Therefore, it is worth noting that:

  • If someone obtains your encodedKeyData value and a copy of your encrypted data, they can attempt to brute-force your passphrase offline;
  • EncFS is only as secure as the passphrase you assign it – there are no brute-force lockout procedures in place; and
  • Physical access to the files (by means of PC or RDP) should still be limited.


With those caveats in mind, EncFS is a reliable, safe and fast method to encrypt data.

Learning PowerShell with Michael.

At present, I am refining my PowerShell usage, updating my scripts to make the code more readable, and slowly learning new ways to do things more easily and faster. I’ve been active on several forums relating to PowerShell (you may have found this blog from there?), and thought I’d make my own post.

Whilst I’ll attempt to be as thorough as possible (we all know I do not vet my own documents), this shall not be an all-encompassing guide/post on PowerShell. The post will briefly cover:

  1. What is Windows Management Framework 5.0?
  2. IDE(s) and their benefits
  3. Using Variables
  4. Using Functions

So, let’s get into it.

What is Windows Management Framework 5.0?

The technical answer is:

Windows Management Framework (WMF) is the delivery mechanism that provides a consistent management interface across the various flavors of Windows and Windows Server.


In easier terminology, it is a distinct sub-set of Windows tools designed for automation, maintaining and auditing Windows PC(s), and primarily, Windows Servers.

Think of WMF as a toolbox that houses tools:

In Windows, .NET Framework and PowerShell are implemented through the Enable/Disable Features option.

Of course, you should be able to just use DISM to enable the feature as well:

Dism /online /enable-feature /featurename:NetFx3 /All /Source:F:\sources\sxs /LimitAccess
  • Where F:\sources\sxs is the SXS folder of your installation media.

Note the following availability:

Operating System         WMF 5.1       WMF 5.0       WMF 4.0       WMF 3.0       WMF 2.0
Windows Server 2016      Ships in-box
Windows 10               Ships in-box  Ships in-box
Windows Server 2012 R2   Yes           Yes           Ships in-box
Windows 8.1              Yes           Yes           Ships in-box
Windows Server 2012      Yes           Yes           Yes           Ships in-box
Windows 8                                                          Ships in-box

IDE(s) and their benefits

Integrated Development Environments, or “IDEs”, differ slightly from the Integrated Scripting Environment, “ISE”. For example, the following quote describes an IDE:

An IDE normally consists of a source code editor, build automation tools and a debugger. Most modern IDEs have intelligent code completion. Some IDEs, such as NetBeans and Eclipse, contain a compiler, interpreter, or both.

ISE is rather limited, as:

  • It is designed for PowerShell only (as far as I am aware);
  • There is no real debugger, just console output; and
  • It was designed for Microsoft’s operating system only (PowerShell itself can be run on Linux and macOS, though)

Whilst I do not dislike the ISE for PowerShell, it’s not one I would suggest you use. Sure, it has all the cmdlets housed in a neat menu, depicting what category they fall under, but that’s it.

Personally, I would recommend Microsoft’s other editor, Visual Studio Code. The same syntax highlighting and autocomplete functions are readily available, it supports multiple languages, and it has a community library of add-ons.


VS Code Syntax Highlighting

Benefits of using Visual Studio Code over ISE:

  • If you decide PowerShell is not for you, change your palette language!
  • Heaps of useful add-ons;
  • Open Source;
  • Fantastic Syntax Highlighting and auto-complete and;
  • Because it’s just better.


Using Variables

Variables are the second most powerful “function” (no pun(s) intended) in PowerShell, in my opinion. A variable is a named value defined in a script that can be referenced later, to make the code shorter, cleaner and more consistent.

A variable can take a complex command, and make it easier to reference down the script. In the following example, I have set 4 variables for commands I wish to use:

$Name = $env:USERNAME
$PC   = hostname
$Date = (Get-Date).ToString('dd-MM-yyyy')
$Time = Get-Date -Format HH-mm-ss

Note that wrapping a command in quotes (e.g. $PC = "hostname") would store the literal string rather than the command’s output.

To set a variable, you use the following Syntax:

$NameOfVariable = <value or expression to evaluate>

Which can be translated:

$YourName = Read-Host "What is your name"
Write-Host "Greetings, $YourName"

Always remember to reference a variable using the “$” symbol; inside a double-quoted string it will be expanded into its value.

Variables allow you to replicate a complex command easily, multiple times throughout a script. Editing the function of the variable is reflected each time the script calls the variable. Without variables, code would be a lot messier and could be much harder to debug – a wrong comma in a line incorrectly copied could break the entire script.

Learning how to implement variables allows for scripts that are:

  • Smaller in size;
  • Smaller in code;
  • Generally more robust;
  • Easier to debug;
  • Generally easier to read (if shared)

Using Functions

Functions are perhaps the most useful feature of PowerShell. PowerShell functions, similar to variables, allow you to wrap complex command(s) and reference them by name. The syntax is:

function Name {
    command 1
    command 2
}

  • Name is the name of the function;
  • {} open and close the function body, placed at the beginning and end; and
  • calling Name executes the function; it does not need to be called straight after the definition.

Functions support variables that can be predefined in the PowerShell script. In the following example, the function “Shutdown-AllDomainPCs” uses predefined variables to execute a command:

$H = Read-Host "What is the IP Address of your Domain Controller?"
$nH = "\\*"
$u = Read-Host "What's your domain admin username?"
$p = Read-Host "Enter Password" -AsSecureString

function Shutdown-AllDomainPCs {
    psexec $nH -u $u -p $p shutdown -f -r -t 0
}

Yes, there may be syntax errors or the command might not even work (psexec, for one, expects a plain-text password rather than a SecureString); I am simply demonstrating. You should totally try it with domain admin rights, however.

You can read a little more on variables and functions from Microsoft here:

function Test ($x, $y) {
    $x * $y
}

Enough functions and variables, let’s nest functions! Yeah, you heard me. Functions calling functions!

A simple example:

function One {
    Write-Output 1
}

function Two {
    Write-Output 2
}

function Three {
    One
    Two
}

The third function executes One and Two. A handy little trick that allows you to perform multiple steps.

In the following example, I set multiple variables, and then use an “if” statement to check whether a directory exists, and write data to it:


Again, this is purely an example – the code may not work 😉


Want to edit this post? Want to post your own content?

I am hoping for some additional writers on this blog. If you want to contribute, please use the comment function, and I will be in touch.