My New Backup Policy

It’s been a while since I shared why backups and I do not mesh, and I even went as far as asking cloud services to encrypt my data for me. Today (well, it’s actually 1am, so this morning) I am going to share with you a slightly improved version of my backup methods.

This time, I have built some complexity into my backup regime and made it more robust. To summarize: in the past I have used applications such as Cobian and EaseUS; Cobian’s VSS service kept failing me, and EaseUS was corrupting the data when writing, so I had 2 copies of corrupt data and was unaware of it for months. With my newest solution I aim to tackle this with the following:

  1. 4 jobs to individually copy files (explained later);
  2. 1 job to copy the entire 4 backup files to a NAS, and;
  3. 1 job to copy the local files to another NAS drive.

With these jobs, the end files are written as native .zip archives, and an email notification with the job status is sent out to confirm the outcome. Of course, this isn’t foolproof, but I am hoping that the 3 duplicated copies of the files and the email notifications are enough to save me; heck, the program also checks the integrity of the files! So, I am going to detail this for you wonderful people.

Just warning you – this is going to be a lot of “this is the button I clicked, this is the screen” below, but that’s life.

Having an escape medium

Before beginning, I needed to come up with a solution that would “get me out of trouble should this PC die”. Of course, the obvious solution for holding upwards of 40GB is a NAS, or Network Attached Storage. Utilising an independent medium allows me to segregate damage to my PC from the NAS (viruses, power surges and water damage being the main causes).

The initial copy job should be local (as in, an internal drive) for the following reasons:

  1. You’re less likely to have corrupt backups locally than with network-generated files;
  2. If the external media dies, you should have another copy of the data, and;
  3. You’ve got immediate access for recovery should you require it.

So with that, I re-purposed my Orico 2 Bay NAS (which has been awesome if you’re after a cheap NAS bay – note it does not have full “NAS” functionality) to be the medium receiving the data. Because I encrypt my files, I needed a copy of AxCrypt and Symantec PGP Encryption in each directory I planned on using the backups in. This was my “escape medium”.

Under 4 directories (2x NAS drives, local SSD and recovery USB) I housed the installers for AxCrypt and Symantec PGP. Note that because my backup solution writes native .zip files, I do not need a separate tool to extract anything.

Backup Software

In this example (and after over 25 hours of research) I have come to the conclusion that iPerius is one of the best tools out there for data backup. I enjoyed this software so much that I paid the USD$315.00 figure to unlock all features (and “support development”).

I will go through it step by step, with pictures, to show how I implemented my backup policies.

Selecting Directories

The first step in creating your backup files is to set your “working” directories – the folders you wish to back up. Because I use Symantec PGP Disks, I want to back up only the .pgd files, and not the entire directory.

With iPerius, I apply a filter to include only the PGD file extension, as follows:


However, I also wanted one job per PGD file, so I added the other 3 PGD files to the exclusion list, as follows:







Why is this important?

The reason this is so important is that it demonstrates iPerius’s ability not only to include whole file extensions, but also to exclude specific files you do not want.

Destination Information

The next step is to state where you want the data stored. On this screen there are 3 important options I want to highlight:

  1. How many copies of the same file I wanted this job to keep;
  2. Whether I wanted the files compressed, and with a password, and;
  3. What I wanted the file to be named.

As you can see, I want to keep ~10 copies (full, not incremental) of the backup, and I have named the file based on the following parameters:


Logging The Data

Next, we want to make it autonomous; if you have to initiate the backup manually, you’ll probably forget.

I went and set my days, and time, and asked the program to create a log file per backup:


Then, I configured the program to send me an email upon completion of each job:

Email Settings

The settings for the email portion.

I want to pay special attention to the email it sends you. Not only can you customize the header, recipients and body, but the format is easy to understand, and it is all wrapped up in a nicely presented status bar telling you where the job is at:


The encryption portion of the data

In the past I’ve explained why it’s important to secure personal privacy, and demonstrated how basic encryption works, but never how I personally achieve this. A few years ago I purchased 15 copies of a PGP program for a customer who never went through with the deal – so now I have copies!

Symantec PGP Disks are how I store (and encrypt) data. Similar to this paper on Whole Disk Encryption, my Pretty Good Privacy application has a screen listing the PGP disks I have created:


These files can be located in Explorer, as such:


To access the files (mounting them into memory) you simply enter the disk’s passphrase in the following pop-up:


The reason I use this tool? It’s reliable, and rather secure – if my understanding of how PGP works is accurate, anyway.


So there you have it, a simple backup policy, that has enough redundancy to keep me out of trouble! Now if only Gmail labels were useful… 😉

Bash, making things easier.

Following on from my post last night about WGet and YouTube-DL, we’ve learned how to enable Bash on Windows 10. This is an extremely useful thing to do, because it empowers you to use commands that are not native to Windows (or, as a Linux fanboy would say, M$).

So, just so you all get a better understanding of the improved functionality of bash, we’re going to make some comparisons and examples.

I’m not going into detail – there is far too much to cover.

Network Monitoring – netstat.

On a Windows box, to see current usage per process, the easiest method is to run:

netstat -a -b

Which will return output similar to this:


Rather simple to do, and allows you to see what process is responsible for what traffic, and what protocol it is using.

GNU+Linux supports netstat, but with a completely different command syntax. To see connections with the owning process, simply run the following:

root@DESKTOP-3O8E0L8:/home/nanky# netstat -p

Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name


It returns slightly more intuitive output (in my opinion). For example, we can then view per-interface statistics by adding -i to the command:

netstat -i

This is an advantage over Windows. But let us get to the killer feature:

netstat -a -v -W -r

The following flags are used:

 -a, --all
 Show both listening and non-listening sockets. With the --interfaces option, show interfaces that are not up.

 -v, --verbose
 Tell the user what is going on by being verbose. Especially print some useful information about unconfigured address families.

 -W, --wide
 Do not truncate IP addresses by using output as wide as needed. This is optional for now to not break existing scripts.

This allows the command to return more valuable information, depending on the situation. However, there are better tools: bmon and nethogs.


Start by issuing the following command:

sudo apt-get install bmon


Once installed, you should always look at the man page:

man bmon


Using bmon allows you to view the usage and statistics per interface, such as:

BMON Capture

Of course, there are other tools out there to conquer these tasks – I would strongly suggest you read this post outlining other sysadmin tools available to you.


Automation, crontabs.

As opposed to the clunky Windows Task Scheduler, Linux uses Cron Jobs to execute tasks.

You’ll need to have bash running for CronJobs to work on Windows.

Pretty self-explanatory: create a script or command you want to execute, and add it to the scheduler. Here is the default example provided to you:

# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/

You guessed it, the stars represent time:

# m h dom mon dow command

The tar -zcf portion is the command it executes. Pretty self-explanatory. For example, 7:30am every Monday:

crontab -e
30 7 * * 1 /my/command/to/execute/

You get this. Easy stuff.
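To make the five fields concrete, here is a small bash sketch that splits the entry from above into its named parts (the command path is the same placeholder as in the example):

```shell
# Split a crontab entry into its five time fields plus the command.
entry='30 7 * * 1 /my/command/to/execute/'
read -r min hr dom mon dow cmd <<< "$entry"
echo "minute=$min hour=$hr day-of-month=$dom month=$mon day-of-week=$dow"
echo "runs: $cmd"   # day-of-week 1 = Monday
```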


Task Maintenance – task.

Okay, so this one’s not so critical – I just love this application: TaskWarrior. It is a super simple yet super powerful CLI-driven task manager.

sudo apt-get install task


There we go, you’ve installed it. Let’s add our first task:

nanky@DESKTOP-3O8E0L8:~$ task add priority:H due:31 project:personal edit this css

Now let’s view our task:

 ID Age P Due Description Urg
 2  26s H  4w edit this css

Pretty simple method to view the task at hand. Now we want to view the task with the ID ‘2’:

task 2 info

Which will return the following:

Name Value
ID 2
Description change task 1
Status Pending
Entered 2017-09-27 23:50:56 (1min)
Last modified 2017-09-27 23:50:56 (1min)
UUID 34c4cf80-a857-4123-a463-4c4bcc44b591
Urgency 6
Priority H

UDA priority.H 1 * 6 = 6

You can sync your tasks across multiple devices, too! Just view their usage examples, and you’ll get the feel for how complex you can make the tool.

Lastly, text editing.

I cannot live without GNU Nano. Yes, you could use Vim, but the simplicity of Nano amazes me.

For example, let’s edit a file and close it, all without needing to locate it, open it, manually save and confirm dialogs:

nano /mnt/c/path/to/file/yo.txt

It is literally that simple, and you can interact with files stored on Windows natively.

That’s it.

You pretty much get the picture; CLI > GUI.


Just read:

  1. 20 Command Line Tools to Monitor Linux Performance
  2. Best Linux Command-Line Tools For Network Engineers
  3. Top 5 Linux Utilities for Network Engineers






Let’s chat about how you chat.

In the modern day, there is a plethora of instant-messaging applications at your disposal to communicate, send photos and videos, and share your location with family and friends. The same is true of the Government housing all this data. So, I thought I’d add some context as to what you can do to improve security when using instant-messaging applications.

The best thing about factual topics on the internet is that you can quote and reuse them to reiterate the same fact in your own post, and it’s for a good cause: keeping things factual.

Why do I need an encrypted chat program?

You do not necessarily need an encrypted chat program; however, it is important to those who wish to have privacy with those they speak to. Any message (ASCII), photo, video or voice recording transmitted via Facebook, Snapchat, Instagram or even conventional SMS can be (and most likely is) stored in some centralised database, building a profile of you.

Do you remember people complaining that Facebook knows what they look at on their phones? As an example, this post goes into detail about data you are sharing with Facebook, even though you are not directly opting to do so (read the terms of service next time):

  • Videos you’ve watched
  • Comments you’ve liked
  • Websites you’ve visited
  • Articles and websites you’ve commented on
  • Surveys you’ve filled out
  • Companies you like
  • People you’ve been tagged with
  • People you frequently hang out with
  • Friends you’ve requested
  • Friends you denied
  • Friends you’ve un-friended
  • How often you are online
  • Apps you Admin/created
  • Pages you admin/created
  • Your current mood
  • Device you’ve accessed the Internet from
  • Exact Geo-location (longitude, altitude, latitude, time/date stamp)
  • TV, Film, Concert you are currently watching
  • Book or publication you are currently reading
  • Audio you are currently listening to
  • Drink you are currently drinking
  • Food you are currently eating
  • Activities you participate in
  • Advertising you interact with
  • Profiles you interact with most
  • Locations you access Facebook
  • Locations you access web properties connected to Facebook
  • Level of online engagement
  • When you changed jobs
  • How long you stayed in a job
  • Credit card details
  • IP Address
  • Apps you’ve downloaded
  • Games you’ve played
  • Pages/Businesses you’ve un-liked (when)

The main reasoning behind the highlighted items is that, in unison, anyone with access to this data can isolate your location, recent visits and what devices you have on you. Not only does this potentially make you vulnerable to tracking by people, it also opens you up to someone stealing your identity (yes, these are very dramatic repercussions, but still valid).

  • Device you’ve accessed the Internet from

Services (this is not limited to Facebook) should not have the ability to catalogue the devices tied to an account.

With this ability, they are able to (stemming into the next point) always have a geo-located position of the device (and its assumed owner) at their disposal.

  • Exact Geo-location (longitude, altitude, latitude, time/date stamp)

Exact. What the actual Foxtrot Unicorn Charlie Kite? Services knowing where I am, at whatever time, is a clear abuse of power.

Using this information, patterns of travel and location can allow you to be tracked. Using services such as Facebook to allow geo-location tracking is absurd in my opinion.

  • IP Address

If you know my public IP behind a NAT, you can easily sniff and track the web usage of certain people; again, what the actual Foxtrot Unicorn Charlie Kite?

  • Websites you’ve visited

Because Internet censorship is what everyone wants, right? Having ISPs logging metadata and services knowing the websites you visit is a sheer breach of personal privacy.

  • Book or publication you are currently reading

Personally, I read a lot of politically incorrect publications (not that I am some crazy person) and I’d like to keep that private; Facebook knowing I am “Googling” terrorist attacks and researching them should not be recorded without my explicit confirmation.

Now of course, it is impossible not to use services that record all this data (personally, Google knows everything about me), but there are valid techniques to mitigate the collection and aggregation of this information.

What are the things I should look for in such a program?

This is a broad and rather opinion-based topic. Discussions relating to open versus closed source, cross-platform support and protocols are open for debate.

Personally (and I am not a security expert), there are 3 things an application needs to make it secure:

  • Open source, peer reviewed;
  • De-facto encryption standards and;
  • Non identifying sign-up requirements.

Open source, peer reviewed

This topic attracts a lot of scrutiny. One argument is that vulnerabilities are shared with anyone reading the code; at the same time, a greater set of eyes allows them to be patched faster.

My opinion is summarised by this statement:

Do I choose Safe Number One, which is advertised to have half-inch steel walls, an inch-thick door, six locking bolts, and is tested by an independent agency to confirm that the contents will survive for two hours in a fire? Or do I choose Safe Number Two, a safe the vendor simply says to trust, because the design details of the safe are a trade secret? It could be that Safe Number Two is made of plywood and thin sheet metal. Or it could be that it is stronger than Safe Number One, but the point is I have no idea.

Of course, the battle has raged across the internet and everyone has their own opinions. I personally believe that if the code is peer-reviewed, back doors and holes in the software are far less likely than with a proprietary company that will share your data for the right payment.

De-facto encryption standards

When you use a program designed with security in mind, you do not want to rely on some newly-created protocol that takes a thousand builds to become stable. Implementing either old (usually broken) or unstable protocols is a flaw in itself when trying to enact secure messaging.

As with all my posts, the technical content will be highly referenced as I am (as I’ve said) not a security expert.

When looking at encryption and cryptography, there are a number of standards you can enact.

Extensible Messaging and Presence Protocol (XMPP)

is a communications protocol for message-oriented middleware based on XML (Extensible Markup Language).[1] It enables the near-real-time exchange of structured yet extensible data between any two or more network entities.[2] Originally named Jabber,[3] the protocol was developed by the Jabber open-source community in 1999 for near-real-time instant messaging (IM), presence information, and contact list maintenance. Designed to be extensible, the protocol has also been used for publish-subscribe systems, signalling for VoIP, video, file transfer, gaming, Internet of Things (IoT) applications such as the smart grid, and social networking services.

Think of XMPP as the back-end transport method and not necessarily the encryption methodology. XMPP is good as it is an open standard, and is scalable across platforms.

According to XMPP:

Secure — any XMPP server may be isolated from the public network (e.g., on a company intranet) and robust security using SASL and TLS has been built into the core XMPP specifications. In addition, the XMPP developer community is actively working on end-to-end encryption to raise the security bar even further.

An XMPP Server is considered secure when the following (minimum) items are present:

  • The server is running with a server certificate
  • The server is configured to not allow any cleartext communications – S2S and C2S
  • The server supports XEP-198

Note that unless you have clear access to the code running on the server to validate the above, you should assume the XMPP portion of the application is insecure.

Off-the-Record Messaging (OTR)

is a cryptographic protocol that provides encryption for instant messaging conversations. OTR uses a combination of the AES symmetric-key algorithm with 128-bit key length, the Diffie–Hellman key exchange with 1536-bit group size, and the SHA-1 hash function. In addition to authentication and encryption, OTR provides forward secrecy and malleable encryption.

OTR is a rather complex protocol. Before commencing an encrypted data exchange, both parties must do an unauthenticated Diffie-Hellman (D-H) key exchange to set up an encrypted channel, and then do mutual authentication inside that channel.

Let’s use Bob and Alice as the example here. Bob must initiate the AKE (Authenticated Key Exchange) as follows:


Bob:
  1. Picks a random value r (128 bits)
  2. Picks a random value x (at least 320 bits)
  3. Sends Alice AES_r(g^x), HASH(g^x)


Alice:
  1. Picks a random value y (at least 320 bits)
  2. Sends Bob g^y


Bob:
  1. Verifies that Alice’s g^y is a legal value (2 <= g^y <= modulus-2)
  2. Computes s = (g^y)^x
  3. Computes two AES keys c, c′ and four MAC keys m1, m1′, m2, m2′ by hashing s in various ways
  4. Picks keyidB, a serial number for his D-H key g^x
  5. Computes MB = MAC_m1(g^x, g^y, pubB, keyidB)
  6. Computes XB = pubB, keyidB, sigB(MB)
  7. Sends Alice r, AES_c(XB), MAC_m2(AES_c(XB))


Alice:
  1. Uses r to decrypt the value of g^x sent earlier
  2. Verifies that HASH(g^x) matches the value sent earlier
  3. Verifies that Bob’s g^x is a legal value (2 <= g^x <= modulus-2)
  4. Computes s = (g^x)^y (note that this will be the same as the value of s Bob calculated)
  5. Computes two AES keys c, c′ and four MAC keys m1, m1′, m2, m2′ by hashing s in various ways (the same as Bob)
  6. Uses m2 to verify MAC_m2(AES_c(XB))
  7. Uses c to decrypt AES_c(XB) to obtain XB = pubB, keyidB, sigB(MB)
  8. Computes MB = MAC_m1(g^x, g^y, pubB, keyidB)
  9. Uses pubB to verify sigB(MB)
  10. Picks keyidA, a serial number for her D-H key g^y
  11. Computes MA = MAC_m1′(g^y, g^x, pubA, keyidA)
  12. Computes XA = pubA, keyidA, sigA(MA)
  13. Sends Bob AES_c′(XA), MAC_m2′(AES_c′(XA))


Bob:
  1. Uses m2′ to verify MAC_m2′(AES_c′(XA))
  2. Uses c′ to decrypt AES_c′(XA) to obtain XA = pubA, keyidA, sigA(MA)
  3. Computes MA = MAC_m1′(g^y, g^x, pubA, keyidA)
  4. Uses pubA to verify sigA(MA)

If all of the verifications succeeded, Alice and Bob now know each other’s Diffie-Hellman public keys, and share the value s. Alice is assured that s is known by someone with access to the private key corresponding to pubB, and similarly for Bob.

Once this has been established, you can go about exchanging data.

Suppose Alice has a message (msg) to send to Bob:


Alice:
  • Picks the most recent of her own D-H encryption keys that Bob has acknowledged receiving (by using it in a Data Message, or failing that, in the AKE). Let keyA be that key, and let keyidA be its serial number.
  • If the above key is Alice’s most recent key, she generates a new D-H key (next_dh), to get the serial number keyidA+1.
  • Picks the most recent of Bob’s D-H encryption keys that she has received from him (either in a Data Message or in the AKE). Let keyB be that key, and let keyidB be its serial number.
  • Uses Diffie-Hellman to compute a shared secret from the two keys keyA and keyB, and generates the sending AES key, ek, and the sending MAC key, mk, as detailed below.
  • Collects any old MAC keys that were used in previous messages, but will never again be used (because their associated D-H keys are no longer the most recent ones) into a list, oldmackeys.
  • Picks a value of the counter, ctr, so that the triple (keyA, keyB, ctr) is never the same for more than one Data Message Alice sends to Bob.
  • Computes TA = (keyidA, keyidB, next_dh, ctr, AES-CTR_ek,ctr(msg))
  • Sends Bob TA, MAC_mk(TA), oldmackeys


Bob:
  • Uses Diffie-Hellman to compute a shared secret from the two keys labelled by keyidA and keyidB, and generates the receiving AES key, ek, and the receiving MAC key, mk, as detailed below. (These will be the same as the keys Alice generated, above.)
  • Uses mk to verify MAC_mk(TA).
  • Uses ek and ctr to decrypt AES-CTR_ek,ctr(msg).
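The Data Message steps can be sketched with openssl; everything below (keys, counter, message) is a made-up stand-in for the values the AKE would really derive, and for brevity only the ciphertext is MACed here rather than the full TA tuple:

```shell
# Toy OTR-style Data Message: encrypt with AES-CTR, then MAC the ciphertext.
msg='hi Bob'
ek=00112233445566778899aabbccddeeff           # toy 128-bit sending AES key
mk=0123456789abcdef0123456789abcdef01234567   # toy 160-bit MAC key
ctr=00000000000000000000000000000001          # toy counter block

ct=$(mktemp)   # holds AES-CTR_ek,ctr(msg)
printf '%s' "$msg" | openssl enc -aes-128-ctr -K "$ek" -iv "$ctr" -out "$ct"

# Sender MACs what is sent (real OTR MACs the whole TA tuple with HMAC-SHA1)
mac=$(openssl dgst -sha1 -mac HMAC -macopt hexkey:"$mk" "$ct" | awk '{print $NF}')

# Receiver recomputes the MAC and only decrypts on a match
mac2=$(openssl dgst -sha1 -mac HMAC -macopt hexkey:"$mk" "$ct" | awk '{print $NF}')
if [ "$mac" = "$mac2" ]; then
  plain=$(openssl enc -d -aes-128-ctr -K "$ek" -iv "$ctr" -in "$ct")
  echo "verified and decrypted: $plain"
fi
rm -f "$ct"
```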

Do you like advanced mathematics?

So if it’s just a pre-defined function, surely it can be impersonated?

Socialist Millionaires’ Protocol (SMP)

is one in which two millionaires want to determine if their wealth is equal without disclosing any information about their riches to each other. It is a variant of the Millionaire’s Problem[2][3] whereby two millionaires wish to compare their riches to determine who has the most wealth without disclosing any information about their riches to each other.

Basically: let’s check who you are without disclosing information. Another fun example of maths. Assuming that Alice begins the exchange:


Alice:
  1. Picks random exponents a2 and a3
  2. Sends Bob g2a = g1^a2 and g3a = g1^a3


Bob:
  1. Picks random exponents b2 and b3
  2. Computes g2b = g1^b2 and g3b = g1^b3
  3. Computes g2 = g2a^b2 and g3 = g3a^b3
  4. Picks random exponent r
  5. Computes Pb = g3^r and Qb = g1^r g2^y
  6. Sends Alice g2b, g3b, Pb and Qb


Alice:
  1. Computes g2 = g2b^a2 and g3 = g3b^a3
  2. Picks random exponent s
  3. Computes Pa = g3^s and Qa = g1^s g2^x
  4. Computes Ra = (Qa / Qb)^a3
  5. Sends Bob Pa, Qa and Ra


Bob:
  1. Computes Rb = (Qa / Qb)^b3
  2. Computes Rab = Ra^b3
  3. Checks whether Rab == (Pa / Pb)
  4. Sends Alice Rb


Alice:
  1. Computes Rab = Rb^a3
  2. Checks whether Rab == (Pa / Pb)

If everything is done correctly, then Rab should hold the value of (Pa / Pb) times (g2^(a3 b3))^(x − y), which means that the test at the end of the protocol will only succeed if x == y. Further, since g2^(a3 b3) is a random number not known to any party, if x is not equal to y, no other information is revealed.
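As a toy illustration of the exchange above, the following bash sketch runs both sides of SMP in one script, with a tiny 23-element group and fixed “random” exponents (the real protocol uses a 1536-bit group and genuine randomness, and the parties exchange the intermediate values rather than sharing one script):

```shell
# Both sides of a toy SMP run. p and g1 define a tiny group.
modexp() {  # modexp BASE EXP MOD — square-and-multiply
  local b=$(( $1 % $3 )) e=$2 m=$3 r=1
  while (( e > 0 )); do
    (( e & 1 )) && r=$(( r * b % m ))
    b=$(( b * b % m )); e=$(( e >> 1 ))
  done
  printf '%s' "$r"
}
inv() { modexp "$1" $(( p - 2 )) "$p"; }  # modular inverse via Fermat (p prime)

p=23; g1=5
x=6; y=6                        # Alice's and Bob's secrets being compared
a2=4 a3=7 b2=3 b3=5 r=9 s=11    # fixed here; random in the real protocol

g2=$(modexp "$g1" $(( a2 * b2 )) "$p")   # g2 = g1^(a2 b2), built over two messages
g3=$(modexp "$g1" $(( a3 * b3 )) "$p")   # g3 = g1^(a3 b3)
Pb=$(modexp "$g3" "$r" "$p"); Qb=$(( $(modexp "$g1" "$r" "$p") * $(modexp "$g2" "$y" "$p") % p ))
Pa=$(modexp "$g3" "$s" "$p"); Qa=$(( $(modexp "$g1" "$s" "$p") * $(modexp "$g2" "$x" "$p") % p ))
QaQb=$(( Qa * $(inv "$Qb") % p ))        # Qa / Qb in the group
Ra=$(modexp "$QaQb" "$a3" "$p")
Rab=$(modexp "$Ra" "$b3" "$p")           # Rab = Ra^b3 = (Qa/Qb)^(a3 b3)
PaPb=$(( Pa * $(inv "$Pb") % p ))        # Pa / Pb
[ "$Rab" -eq "$PaPb" ] && echo 'secrets match' || echo 'secrets differ'
```

Changing y to any value other than 6 makes the final check fail without revealing either secret.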

Pretty neat documentation on OTR. Props to them.

Diffie–Hellman key exchange (D–H) – Elaborated for OTR Implementation.

 is a method of securely exchanging cryptographic keys over a public channel

Or as I prefer to explain it:

Diffie–Hellman is a mathematical algorithm to exchange a shared secret between two parties. This shared secret can be used to encrypt messages between these two parties.

This “methodology” (it’s actually a protocol) is used to derive a shared key without ever transmitting it. The following shows how it works:

The simplest and the original implementation of the protocol uses the multiplicative group of integers modulo p, where p is prime, and g is a primitive root modulo p. These two values are chosen in this way to ensure that the resulting shared secret can take on any value from 1 to p–1. Here is an example of the protocol, with non-secret values in blue, and secret values in red.

  1. Alice and Bob agree to use a modulus p = 23 and base g = 5 (which is a primitive root modulo 23).
  2. Alice chooses a secret integer a = 6, then sends Bob A = g^a mod p
    • A = 5^6 mod 23 = 8
  3. Bob chooses a secret integer b = 15, then sends Alice B = g^b mod p
    • B = 5^15 mod 23 = 19
  4. Alice computes s = B^a mod p
    • s = 19^6 mod 23 = 2
  5. Bob computes s = A^b mod p
    • s = 8^15 mod 23 = 2
  6. Alice and Bob now share a secret (the number 2).

Both Alice and Bob have arrived at the same value s because, under mod p,

A^b mod p = g^(ab) mod p = g^(ba) mod p = B^a mod p

More specifically,

(g^a mod p)^b mod p = (g^b mod p)^a mod p

The following values are then stated:

  • g = public (prime) base, known to Alice, Bob, and Eve. g = 5
  • p = public (prime) modulus, known to Alice, Bob, and Eve. p = 23
  • a = Alice’s private key, known only to Alice. a = 6
  • b = Bob’s private key, known only to Bob. b = 15
  • A = Alice’s public key, known to Alice, Bob, and Eve. A = g^a mod p = 8
  • B = Bob’s public key, known to Alice, Bob, and Eve. B = g^b mod p = 19
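Under the toy numbers above, the whole exchange fits in a few lines of bash arithmetic (fine for p = 23; real D-H uses numbers far too large for this):

```shell
# p and g are public; a and b are the private keys from the list above.
p=23 g=5
a=6  b=15
A=$(( g**a % p ))    # Alice sends A = g^a mod p → 8
B=$(( g**b % p ))    # Bob sends   B = g^b mod p → 19
sA=$(( B**a % p ))   # Alice: s = B^a mod p
sB=$(( A**b % p ))   # Bob:   s = A^b mod p
echo "A=$A B=$B Alice s=$sA Bob s=$sB"   # both sides arrive at s = 2
```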


This is literally all available on Wikipedia by the way.

It is imperative to understand that Diffie–Hellman is just a function to compute a shared key, not a full protocol. To actually use it, you need to design a protocol on top of it; OTR, for instance, signs the D-H key with its long-term key.

Perfect Forward Secrecy (PFS)

Again, this is bundled into the OTR implementation. In its simplest form:

PFS is a property of secure communication protocols in which compromise of long-term keys does not compromise past session keys. Forward secrecy protects past sessions against future compromises of secret keys or passwords. If forward secrecy is used, encrypted communications and sessions recorded in the past cannot be retrieved and decrypted should long-term secret keys or passwords be compromised in the future, even if the adversary actively interfered.

To get a better understanding of this, it can be stated that:

A public-key system has the property of forward secrecy if it generates one random secret key per session to complete a key agreement, without using a deterministic algorithm. This means that the compromise of one message cannot compromise others as well, and there is no one secret value whose acquisition would compromise multiple messages.

There are many iterations of this; the current notable method is the Double Ratchet.
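A crude flavour of the idea in bash (a hash-ratchet sketch, not Signal’s actual Double Ratchet): each message key comes from the current chain key, and the chain key is then hashed forward, so deleted past keys cannot be recomputed from a leaked current state:

```shell
# Toy symmetric hash ratchet: the chain key only moves forward.
ck=$(printf 'shared-root-secret' | sha256sum | awk '{print $1}')
keys=()
for i in 1 2 3; do
  mk=$(printf '%s:message' "$ck" | sha256sum | awk '{print $1}')  # per-step message key
  keys+=("$mk")
  ck=$(printf '%s:chain' "$ck" | sha256sum | awk '{print $1}')    # ratchet forward
  echo "step $i message key: ${mk:0:16}..."
done
```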

Basic Understanding = Done.

Now that you’ve got a basic understanding of why it is important to want to encrypt your data, and an example of how this is accomplished, let’s look at your options.

Again, this post is about picking an app for secure messaging. It is not intended to go into depth on how encryption works.

So, how do we know what to use, and what not to use? Well you could use CryptoCat or an application listed here.

The answer is: whatever application meets your requirements. There is no 100% answer to this question.

Personally I use Signal for a few reasons:

  1. It’s easy to tell people to install;
  2. It implements a double ratchet descended from OTR’s;
  3. It uses Curve25519, an improvement on classic D-H, and;
  4. It’s just really easy to use.
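On point 3, openssl can demonstrate a Curve25519-style key agreement (X25519); the file names here are arbitrary, and this is an illustration rather than what Signal does internally:

```shell
# Both parties derive the same shared secret from their own private key
# and the other party's public key.
dir=$(mktemp -d)
openssl genpkey -algorithm X25519 -out "$dir/alice.pem"
openssl genpkey -algorithm X25519 -out "$dir/bob.pem"
openssl pkey -in "$dir/alice.pem" -pubout -out "$dir/alice.pub"
openssl pkey -in "$dir/bob.pem" -pubout -out "$dir/bob.pub"
s1=$(openssl pkeyutl -derive -inkey "$dir/alice.pem" -peerkey "$dir/bob.pub" | od -An -tx1 | tr -d ' \n')
s2=$(openssl pkeyutl -derive -inkey "$dir/bob.pem" -peerkey "$dir/alice.pub" | od -An -tx1 | tr -d ' \n')
[ "$s1" = "$s2" ] && echo 'shared secrets match'
rm -rf "$dir"
```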


Make sure you check out my blog post about Fighting For Internet Freedom.

Come visit me on Stack Exchange:

Profile for Michael Nancarrow on Stack Exchange


Errors? Typos? More facts needed?

EncFS; easy, fast and reliable?

Implementing a secure file-system in current-day computing is imperative, especially with crypto attacks on the rise. My personal method of ensuring data privacy on a Linux box is EncFS (you may prefer GEncFSM).

EncFS is a Free (LGPL), FUSE-based cryptographic filesystem. It transparently encrypts files, using an arbitrary directory as storage for the encrypted files.

EncFS uses a pair of directories: one encrypted, one un-encrypted. For example, my Dropbox directory could act as the encrypted mirror of my /home directory under EncFS.


Default EncFS Screen

Any data stored in your unencrypted directory is encrypted, using your defined passphrase, into the other directory; mirrored data.

Installation of EncFS

Whilst you can download the GitHub project and follow the installation guide, if you are on Ubuntu or a similar flavour (Kubuntu or Lubuntu, for example) you can simply run the following command:

sudo apt-get -y install encfs

If you prefer GEncFSM, then run the following:

sudo add-apt-repository ppa:gencfsm/ppa
sudo apt-get update
sudo apt-get install gnome-encfs-manager

Usage of EncFS

If you intend to use EncFS from the command line (I usually just default to the UI), then I would suggest inspecting the man page:

 encfs - mounts or creates an encrypted virtual filesystem

 encfs [--version] [-s] [-f] [-v|--verbose] [-i MINUTES|--idle=MINUTES]
 [--extpass=program] [-S|--stdinpass] [--anykey] [--forcedecode]
 [-d|--fuse-debug] [--public] [--no-default-flags] [--ondemand]
 [--delaymount] [--reverse] [--standard] [-o FUSE_OPTION] rootdir
 mountPoint [-- [Fuse Mount Options]]

If you are not too particular with how you want to configure the system, go ahead and perform:

mkdir -p ~/encrypted
mkdir -p ~/decrypted

Then mount them with EncFS (you can later see where they are mounted using the mount command):

encfs ~/encrypted ~/decrypted

You will be prompted to select the mode, and to create a password for the encrypted paths.

Usage of GEncFSM

Using the GUI is probably a lot more manageable here. To create a stash, simply select the plus icon, configure your path and enter a password:


Creating New Stash


Then go ahead and mount the stash:


Mounting Stash

Understanding EncFS

When a file is created in the “Private” directory (in our case, the “un-encrypted” path), a mirror file is created in your “.Private” directory, with both the file name and contents encrypted using a key derived from your passphrase:


Private and .Private

Therefore, if we attempt to read the encrypted file, it does not present any readable data:


File Value
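As an analogy for what EncFS is doing with names and contents (this uses openssl, not EncFS’s actual algorithm, and assumes OpenSSL 1.1.1+ for -pbkdf2): the stored form is gibberish without the passphrase, and the passphrase recovers it exactly:

```shell
# Analogy only — NOT EncFS's real scheme: derive an unreadable stored form
# from a clear file name plus a passphrase, then recover it.
pass='correct horse battery staple'
stored=$(printf 'note.txt' | openssl enc -aes-256-cbc -pbkdf2 -pass pass:"$pass" -base64 -A)
echo "note.txt -> $stored"
recovered=$(printf '%s' "$stored" | openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:"$pass" -base64 -A)
echo "recovered: $recovered"
```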

Of course, if we read the .encfs6.xml file, we will see the encodedKeyData value:


Therefore, it is worth noting that:

  • If someone knows your encodedKeyData value and has a copy of your data, it can be compromised;
  • EncFS is only as secure as the passphrase you assign it – there are no brute-force lockout procedures in place, and;
  • Physical access to the files (by means of PC or RDP) should still be limited.


Therefore, we can assume EncFS is a reliable, safe and fast method to encrypt data.

Learning PowerShell with Michael.

At present, I am refining my PowerShell usage, updating my scripts to make the code more readable, and slowly learning new methods to do things more easily and faster. I’ve been quite active on several PowerShell forums (you may have found this blog from there?), and thought I’d make my own post.

Whilst I’ll attempt to be as thorough as possible (we all know I do not vet my own documents), this shall not be an all-encompassing guide/post on PowerShell. The post will briefly cover:

  1. What is Windows Management Framework 5.0?
  2. IDE(s) and their benefits
  3. Using Variables
  4. Using Functions

So, let’s get into it.

What is Windows Management Framework 5.0?

The technical answer is:

Windows Management Framework (WMF) is the delivery mechanism that provides a consistent management interface across the various flavors of Windows and Windows Server.


In easier terminology, it is a distinct sub-set of Windows tools designed for automating, maintaining and auditing Windows PC(s) and, primarily, Windows Servers.

Think of WMF as a toolbox that houses tools:

In Windows, .NET Framework and PowerShell are enabled through the “Turn Windows features on or off” option.

Of course, you should be able to just use DISM to enable the feature as well:

Dism /online /enable-feature /featurename:NetFx3 /All /Source:F:\sources\sxs /LimitAccess
  • Where F:\sources\sxs is the SXS folder of your installation media.

Note the following availability:

Operating System       | WMF 5.1      | WMF 5.0      | WMF 4.0      | WMF 3.0
Windows Server 2016    | Ships in-box |              |              |
Windows 10             | Ships in-box | Ships in-box |              |
Windows Server 2012 R2 | Yes          | Yes          | Ships in-box |
Windows 8.1            | Yes          | Yes          | Ships in-box |
Windows Server 2012    | Yes          | Yes          | Yes          | Ships in-box
Windows 8              |              |              |              | Ships in-box

IDE(s) and their benefits

Integrated Development Environments, or “IDEs”, differ slightly from the Integrated Scripting Environment, “ISE”. For example, the following quote describes an IDE:

An IDE normally consists of a source code editor, build automation tools and a debugger. Most modern IDEs have intelligent code completion. Some IDEs, such as NetBeans and Eclipse, contain a compiler, interpreter, or both.

The ISE is rather limited, as:

  • It is designed for PowerShell only (as far as I am aware);
  • There is no real debugger, just console output and;
  • It was designed for Microsoft’s operating systems only (PowerShell itself can now run on Linux and macOS, though).

Whilst I do not dislike the ISE for PowerShell, it’s not one I would suggest you use. Sure, it has all the cmdlets housed in a neat menu, depicting what category they fall under, but that’s it.

Personally, I would recommend Microsoft’s other tool, Visual Studio Code. The same syntax highlighting and autocomplete functions are readily available, it supports multiple languages, and it has a large community library of add-ons.


VS Code Syntax Highlighting

Benefits of using Visual Studio Code over ISE:

  • If you decide PowerShell is not for you, change your palette language!
  • Heaps of useful add-ons;
  • Open Source;
  • Fantastic Syntax Highlighting and auto-complete and;
  • Because it’s just better.


Using Variables

Variables are the second most powerful “function” (no pun(s) intended) in PowerShell, in my opinion. A variable is a named piece of data defined in a script that can be referenced later, making the code shorter, cleaner and more consistent.

A variable can take a complex command, and make it easier to reference down the script. In the following example, I have set 4 variables for commands I wish to use:

$Name = $env:USERNAME                       # current user name
$PC   = hostname                            # this machine's hostname
$Date = (Get-Date).ToString('dd-MM-yyyy')   # today's date
$Time = Get-Date -Format HH-mm-ss           # current time

To set a variable, you use the following syntax:

$VariableName = "value or expression to store"

Which can be translated:

$YourName = Read-Host "What is your name"
Write-Host "Greetings, $YourName"

Always remember to call a variable using the “$” symbol; inside double quotes it will still expand, which keeps your string output clean.

Variables allow you to replicate a complex command easily, multiple times throughout a script. Editing the variable’s definition is reflected each time the script calls it. Without variables, code would be a lot messier and much harder to debug – one wrong comma in an incorrectly copied line could break the entire script.

Learning how to implement variables allows for scripts that are:

  • Smaller in size;
  • Smaller in code;
  • Generally more robust;
  • Easier to debug;
  • Generally easier to read (if shared)

Using Functions

Functions are perhaps the most useful feature of PowerShell. PowerShell functions, similar to variables, allow you to wrap complex command(s) and reference them by name. The syntax is:

function name {
    command 1
    command 2
}

  • name is the name of the function;
  • {} open and close the function body, placed at the beginning and end and;
  • calling name on its own line actually executes the function; it does not need to be called straight after the definition.

Functions support variables that can be predefined in the PowerShell script. In the following example, the function “shutdownalldomainpcs()” is using 4 variables to execute a command:

$H = Read-Host "What is the IP Address of your Domain Controller?"
$nH = "\\*"
$u = Read-Host "What's your domain admin username?"
$p = Read-Host "Enter Password" -AsSecureString   # note: psexec actually wants plain text

function shutdownalldomainpcs {
    # Hit every reachable host via psexec as the supplied domain admin
    psexec $nH -u $u -p $p shutdown -f -r -t 0
}

shutdownalldomainpcs

Yes, there may be syntax errors and the command might not even work – I am simply demonstrating. You would need domain admin rights to try it, however.

You can read a little more on variables and functions from Microsoft. A minimal parameterised function looks like:

function test ($x, $y) {
    $x * $y
}

Enough functions and variables, let’s nest functions! Yeah, you heard me. Functions calling functions!

A simple example:

function one {
    Write-Output 1
}

function two {
    Write-Output 2
}

function three {
    one
    two
}

three

The third function, three, executes one and two. Handy little trick to allow you to perform multiple steps.

In the following example, I set multiple variables, and then use an “IF” statement to check whether a directory exists, and write data to it:


Again, a pure example – the code might not work 😉



Want to edit this post? Want to post your own content?

I am hoping for some additional writers on this blog. If you want to contribute, please use the comment function, and I will be in touch.

Why Ubuntu is the Windows 7 of 10.

You’re new to Linux!? Here, let me help you improve your overall experience(s):

su -
# (enter the root password when prompted)
apt-get update
apt-get install xfce4
reboot

If you’re new to Linux, that’s like the number 1 command you need to know. Oh what the hey, whilst you’re at it, go ahead and run:

dd if=/dev/zero of=/dev/sda bs=512 count=1
shred -n 5 -vz /dev/sdb

Okay, so perhaps do not do that last one. I’m just being a total idiot (as per the norm?).

Why do you use Ubuntu?

So a few people who I talk to on blogs and whatnot ask me why I use Ubuntu as my main PC (excluding gaming, that’s Windows 10) and not Windows. Like I said, Ubuntu is the Windows 7 of 10. Why, you ask?

I am going on a tangent of good and bad, contradictions and hypocrisy here, but stick with it – it makes sense in the end(?).

  • It’s not the most bleeding edge, but it’s maintained; I like stable over new features.
  • It’s not the most supported, but has enough to get by; Seems to have all my drivers.
  • It’s not the most efficient resource user, but we can run it and; Xfce!
  • It’s not made by the best company, but it’s not OSX; Apple’s Unix sucks!

Ubuntu, for me, is the “safe Linux” distribution to throw onto a computer, although I’ve not always had success with older builds. 16.04 LTS through to 17.04, I know, will have WiFi support and a graphics driver for my nVidia card.

I trade out on features that I’d like for stability, and I am okay with this. Is it my preferred distribution? No – in no way, shape or form does Ubuntu do anything so extraordinary that I’d recommend it. It’s not bad, there’s just… better.

For me, the most deterring points to Ubuntu are:

  • GNOME is old fashioned and weighs the system down; Unity FTW!
  • Amazon search should never be a thing; Thank God it’s off (or is it?) and;
  • Canonical do some pretty silly things – they’re like the Apple of the Linux world.

So why do I still use this distribution if I am so negative about it?

Oh boy, another tangent

Windows 7 (we’re skipping Vista because it’s just the blueprint for 7) was “trash” when XP was in its “prime”, even though it added all these new features, supported new hardware, and was touted as faster than XP. Windows 7 was slowly adopted (whilst being heavily criticised) by both home users and business users.

With Windows 8 the same deal happened, and with Windows 10, the same again. This doesn’t directly relate to Ubuntu, but it seems we humans (myself included) are a little reluctant to change, and only make the jump when we know it’s safe. “If it ain’t broke, don’t fix it” comes to mind. That applies to why I default to Ubuntu: the current build works for me, and others do not.

However, I would like to point out that I’ve given up on Windows. I no longer wish to use that operating system for anything, and as soon as all my Steam games are ported to Linux, there will not be a single PC in my house that runs that putrid operating system.

So you’ve stated why you prefer Linux but not Ubuntu.

Back to the point. The reason I selected Ubuntu was that, even though it is not the best tool out there, it’s a tool I’ve used in the past (short of 16.04 LTS – and this statement is a lie) and can rely on (more or less) to boot; from there, I can do whatever I wish with it. Of course, there are a number of other distributions I’d much prefer to use, but they all have issues on my PC (at present).

Would you answer the question instead of babbling on about things we care naught for!?

Ubuntu is the base. There is nothing special about Ubuntu apart from its PPAs and its apt-get management. I can skin it how I wish, install applications at my leisure, and edit GRUB if I want.

I use Ubuntu as a solid foundation to meet my requirements, and then alter the settings to accommodate my wishes. I ditch Unity and GNOME for the much prettier, lighter Xfce Desktop Environment (which I strongly recommend), set XTerm as my default terminal, and live a happy life of blazing-fast boot times and 100% CPU utilisation from Amazon Search feeding all my data to Canonical, even though I disabled that setting.

(No seriously my CPU is capped at 100% right now).

Leaving Windows, and want to try Linux?

If you want to make the jump, here are 5 distributions I would recommend over Ubuntu:

Automatic Backup of Android – Non-Rooted

I like to dabble in Android enthusiast circles as a hobby. Recently, I had to pay an absurd amount of money to get the onboard units of my Samsung S7 Edge replaced because KNOX enabled its Custom Binary Lock upon reboot, not allowing me into my phone.

Apart from some minor technicalities, it got me thinking: how can I achieve a full device backup, without root privileges? The process is not quick, but I have found a few solutions for myself.

Whilst it would be preferable that the solution is entirely automatic, and not require a computer, there are limitations without root privileges.

Continue reading

Cloud Services, please encrypt locally beforehand.

I know that I made a post outlining why local backups aren’t for me, but they sort of are. The entire concept of “the cloud” can be rather complex, or simple, depending on how much you want to think about it – but in summary, it is defined as:

A cloud service is any service made available to users on demand via the Internet from a cloud computing provider’s servers, as opposed to being provided from a company’s own on-premises servers.

Everything from entire servers on AWS infrastructure to personal data in a personal cloud storage service has become popular in 2017 – even though a number of reputable cloud services have been compromised recently.

So, why? To many, it’s a simple method of storing data to be accessed via multiple devices, and is a form of “data backup”. Poppycock!

In this post I will briefly touch on some popular cloud providers, and some basic steps to secure your personal data.
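As a preview of the theme – encrypt locally before anything leaves your machine – here is a minimal sketch using openssl (the file name and passphrase are made up for illustration; use a proper passphrase in practice):

```shell
echo "tax-records" > demo.txt

# Encrypt with AES-256, deriving the key from a passphrase via PBKDF2
openssl enc -aes-256-cbc -pbkdf2 -salt \
  -pass pass:use-a-real-passphrase -in demo.txt -out demo.txt.enc
rm demo.txt                     # only demo.txt.enc goes into the sync folder

# Later, pull the ciphertext back down and decrypt it
openssl enc -d -aes-256-cbc -pbkdf2 \
  -pass pass:use-a-real-passphrase -in demo.txt.enc -out demo.txt
```

The provider only ever sees demo.txt.enc, so a breach on their side exposes ciphertext rather than your data.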

Known Cloud Services Providers

Continue reading

Steam, and secondary SSD’s.

So today Ark: Survival Evolved corrupted my steam install (I really do not know how, pesky thing), and let me tell you, it was so painful to repair.

So, as opposed to other typical blog posts, I wanted to vent to the community that reads this blog. You’re probably just going to laugh at me more than anything.

Step 1: Make a current backup (if operational)

Continue reading

Backups and Me Don’t Mesh. Here’s why.

It goes without saying that the content stored on most users’ computers (that is, in the user directory) is important, regardless of what it is. That’s why it is imperative to have frequent backups of the data should something occur, such as Crypto.

Nowadays, there is a plethora of cloud services readily available to store your data in “the cloud”, free of any dangers – or so they say. But does that make the era of local backups redundant? No! You should still take action to secure the integrity of your data locally should any issues arise.
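A minimal sketch of what “securing the integrity of your data locally” can look like – a dated archive plus a recorded checksum, so silent corruption is caught before you actually need the backup (the paths below are made up for illustration):

```shell
mkdir -p important-docs backups
echo "do not lose me" > important-docs/file.txt

# Archive the directory with a dated name...
tar -czf "backups/docs-$(date +%F).tar.gz" important-docs

# ...and record a checksum so corruption is detectable later
sha256sum backups/docs-*.tar.gz > backups/docs.sha256

# Verifying should be part of the backup job, not an afterthought
sha256sum -c backups/docs.sha256
```

Running the verify step on a schedule (and before any restore) is what catches the “two copies of corrupt data” scenario early.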

Continue reading