Learning PowerShell with Michael.

At present, I am refining my PowerShell usage, updating my scripts to make the code more readable, and slowly learning new methods to do things more easily and quickly. I've been quite active on several forums relating to PowerShell (you may have found this blog from there?), and thought I'd make my own post.

Whilst I’ll attempt to be as thorough as possible (we all know I do not vet my own documents), this shall not be an all-encompassing guide/post on PowerShell. The post will briefly cover:

  1. What is Windows Management Framework 5.0?
  2. IDE(s) and their benefits
  3. Using Variables
  4. Using Functions

So, let’s get into it.

What is Windows Management Framework 5.0?

The technical answer is:

Windows Management Framework (WMF) is the delivery mechanism that provides a consistent management interface across the various flavors of Windows and Windows Server.

Source

In simpler terms, it is a distinct set of Windows tools designed for automating, maintaining and auditing Windows PCs and, primarily, Windows servers.

Think of WMF as a toolbox that houses tools such as Windows PowerShell, PowerShell Desired State Configuration (DSC), WMI and WinRM.

In Windows, the .NET Framework and PowerShell are enabled through the "Turn Windows features on or off" dialog.

Of course, you should be able to just use DISM to enable the feature as well:

Dism /online /enable-feature /featurename:NetFx3 /All /Source:F:\sources\sxs /LimitAccess
  •  Where F:\sources\sxs is your installation directory SXS folder.
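If you would rather stay in PowerShell, the DISM cmdlets expose the same feature (the source path below is the same installation-media assumption as above):

# Same operation via the DISM PowerShell module -- adjust -Source to your media's SXS folder.
Enable-WindowsOptionalFeature -Online -FeatureName NetFx3 -All -Source "F:\sources\sxs" -LimitAccess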

Note the following availability:

Operating System          WMF 5.1         WMF 5.0         WMF 4.0         WMF 3.0         WMF 2.0
Windows Server 2016       Ships in-box    –               –               –               –
Windows 10                Ships in-box    Ships in-box    –               –               –
Windows Server 2012 R2    Yes             Yes             Ships in-box    –               –
Windows 8.1               Yes             Yes             Ships in-box    –               –
Windows Server 2012       Yes             Yes             Yes             Ships in-box    –
Windows 8                 –               –               –               Ships in-box    –

IDE(s) and their benefits

Integrated Development Environments (IDEs) differ slightly from the PowerShell Integrated Scripting Environment (ISE). For example, the following quote describes an IDE:

An IDE normally consists of a source code editor, build automation tools and a debugger. Most modern IDEs have intelligent code completion. Some IDEs, such as NetBeans and Eclipse, contain a compiler, interpreter, or both.

ISE is rather limited, as:

  • It is designed for PowerShell only (as far as I am aware);
  • There is no real debugger, just console output and;
  • It was designed for Microsoft's operating systems only (PowerShell itself can be run on Linux and macOS, though)

Whilst I do not dislike the ISE for PowerShell, it’s not one I would suggest you use. Sure, it has all the cmdlets housed in a neat menu, depicting what category they fall under, but that’s it.

Personally, I would recommend Microsoft's other editor, Visual Studio Code. The same syntax highlighting and autocomplete functions are readily available, it supports multiple languages, and it has a community-driven marketplace of add-ons.


VS Code Syntax Highlighting

Benefits of using Visual Studio Code over ISE:

  • If you decide PowerShell is not for you, simply switch to another language!
  • Heaps of useful add-ons;
  • Open Source;
  • Fantastic Syntax Highlighting and auto-complete and;
  • Because it’s just better.

 

Using Variables

Variables are the second most powerful "function" (no pun intended) in PowerShell, in my opinion. A variable is a named piece of data defined in a script that can be referenced later, making the code shorter, cleaner and more consistent.

A variable can take a complex command and make it easier to reference further down the script. In the following example, I have set 4 variables for commands I wish to use:

$Name = "$env:USERNAME"
$PC = "hostname" 
$Date= "(Get-Date).ToString('dd-MM-yyyy')"
$Time = Get-Date -Format HH-mm-ss
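Once defined, those variables can be dropped straight into later commands. A quick illustration (the log path is just a placeholder):

Write-Output "$Name logged on to $PC at $Time on $Date"
"$Name,$PC,$Date,$Time" | Out-File -FilePath "C:\Temp\logons.csv" -Append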

To set a variable, you use the following Syntax:

$NameOfTheVariable = "value or command to run"

Which can be translated:

$YourName = Read-Host "What is your name"
Write-Output "Greetings, $YourName"

Always remember to reference a variable using the "$" symbol; inside double quotes it will expand to its value, which keeps the code clean.

Variables allow you to replicate a complex command easily, multiple times throughout a script. Editing the variable's value is reflected each time the script references it. Without variables, code would be a lot messier and much harder to debug – a wrong comma in an incorrectly copied line could break the entire script.

Some useful examples are:
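A few common patterns (the log path here is just a placeholder):

$LogDir      = "C:\Temp\Logs"                               # placeholder path
$Today       = (Get-Date).ToString('yyyy-MM-dd')
$Stopped     = Get-Service | Where-Object { $_.Status -eq 'Stopped' }
$FreeSpaceGB = [math]::Round((Get-PSDrive -Name C).Free / 1GB, 2)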

Learning how to implement variables allows for scripts that are:

  • Smaller in size;
  • Smaller in code;
  • Generally more robust;
  • Easier to debug;
  • Generally easier to read (if shared)

Using Functions

Functions are perhaps the most useful feature of PowerShell. PowerShell functions, similar to variables, allow you to wrap up complex commands and reference them by name. The syntax is:

function "name"() {
command 1
command 2
etc
}
name()
  • “name”() is the name of set function;
  • {} are the open and close of the function, placed at beginning and end and;
  • name() actually executes the function; it does not need to be called straight after the function.

Functions support variables that can be predefined in the PowerShell script. In the following example, the function "Shutdown-AllDomainPCs" uses 4 variables to execute a command:

$H = Read-Host "What is the IP Address of your Domain Controller?" 
$nH = "\\*" 
$-u = Read-Host "What's your domain admin username?" 
$-p = Read-Host "Enter Password"-AsSecureString 
$command = "psexec "$nH" "$-u" "$-p" shutdown -f -r -t 0" 


function shutdownalldomainpcs(){ 
psexec "-H" "$-u" "$-p" "command" 
}
shutdownalldomainpcs

Yes, there may be syntax errors and the command might not even work – I am simply demonstrating. You should totally try it with domain admin rights, though.

You can read a little more on variables and functions from Microsoft here:

function test ($x, $y)
{
    $x * $y
}
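Calling it is as simple as passing the two arguments positionally:

test 4 5    # outputs 20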

Enough functions and variables, let’s nest functions! Yeah, you heard me. Functions calling functions!

A simple example:

function One {
    Write-Output 1
}

function Two {
    Write-Output 2
}

function Three {
    One
    Two
}

Three

The third function, Three, executes One and Two. A handy little trick that lets you perform multiple steps in one call.

In the following example, I set multiple variables, then use an "if" statement to check whether a directory exists, create it if it does not, and write data to it:

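Something along these lines (the folder path and file name are just placeholders):

$Folder = "C:\Temp\Reports"                         # placeholder path
$Date   = (Get-Date).ToString('dd-MM-yyyy')
$Report = "Report generated on $Date by $env:USERNAME"

if (-not (Test-Path -Path $Folder)) {
    New-Item -Path $Folder -ItemType Directory | Out-Null
}
$Report | Out-File -FilePath "$Folder\report-$Date.txt"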

Again, this is purely an example; adapt the paths and details to your own environment 😉

Some useful links if you are interested:

 

Want to edit this post? Want to post your own content?

I am hoping to bring some additional writers onto this blog. If you want to contribute, please leave a comment and I will be in touch.

Why Ubuntu is the Windows 7 of 10.

You’re new to Linux!? Here, let me help you improve your overall experience(s):

su -
# (enter the root password when prompted)
apt-get install xfce4
reboot

If you’re new to Linux, that’s like the number 1 command you need to know. Oh what the hey, whilst you’re at it, go ahead and run:

dd if=/dev/zero of=/dev/sda bs=512 count=1
shred -n 5 -vz /dev/sdb

Okay, so perhaps do not do that last one. I’m just being a total idiot (as per the norm?).


Why do you use Ubuntu?

So, a few people I talk to on blogs and whatnot ask me why I use Ubuntu on my main PC (excluding gaming, that's Windows 10) and not Windows. Like I said, Ubuntu is the Windows 7 of 10. Why, you ask?

I am going on a tangent of good and bad, contradictions and hypocrisy here, but stick with it; it makes sense in the end(?).

  • It’s not the most bleeding edge, but it’s maintained; I like stable over new features.
  • It’s not the most supported, but has enough to get by; Seems to have all my drivers.
  • It’s not the most efficient resource user, but we can run it and; Xfce!
  • It’s not made by the best company, but it’s not OSX; Apple’s Unix sucks!

Ubuntu, for me, is the "safe Linux" distribution to throw onto a computer, although I've not always had success with older builds. 16.04 LTS through to 17.04, I know, will have WiFi support and a graphics driver for my nVidia card.

I trade features that I'd like for stability, and I am okay with that. Is it my preferred distribution? No – in no way, shape or form does Ubuntu do anything so extraordinary that I'd recommend it. It's not bad, there's just…better.

For me, the most deterring points to Ubuntu are:

  • GNOME is old fashioned and weighs the system down; Unity FTW!
  • Amazon search should never be a thing; Thank God it’s off (or is it?) and;
  • Canonical do some pretty silly things – they’re like the Apple of the Linux world.

So why do I still use this distribution if I am so negative about it?

Oh boy, another tangent

Windows 7 (We’re skipping Vista because it’s just the blueprint for 7) was “trash” when XP was in “prime” form, even though it added all these new features, new support for hardware, and was sported to be faster than XP. Windows 7 was slowly adopted (whilst being heavily criticised) in both the home user and business user areas.

With Windows 8 the same deal happened, and with Windows 10 the same again. This doesn't directly relate to Ubuntu, but it seems that as humans we (and I certainly am) are a little reluctant to change, and only make the jump when we know it's safe. "If it ain't broke, don't fix it" comes to mind. That applies to why I default to Ubuntu: the current build works for me, and others do not.

However, I would like to point out that I've given up on Windows. I no longer wish to use that operating system for anything, and as soon as all my Steam games are ported to Linux, there will not be a single PC in my house running that putrid operating system.

So you’ve stated why you prefer Linux but not Ubuntu.

Back to the point. The reason I selected Ubuntu is that, even though it is not the best tool out there, it's a reliable tool that I've used in the past (short of 16.04 LTS, so that statement is a bit of a lie) and can rely on (more or less). It's a tool I can count on to boot, and from there I can do whatever I wish with it. Of course, there are a number of other distributions I'd much prefer to use, but they all have issues on my PC (at present).

Would you answer the question instead of babbling on about things we care naught for!?

Ubuntu is the base. There is nothing special about Ubuntu apart from its PPAs and apt package management. I can skin it how I wish, install applications at my leisure, and edit GRUB if I want.

I use Ubuntu as a solid foundation to meet my requirements, and then alter the settings to accommodate my wishes. I ditch Unity and GNOME for the much prettier, lighter Xfce desktop environment (which I strongly recommend), set XTerm as my default terminal and live a happy life of blazing-fast boot times and 100% CPU utilisation from the Amazon search feeding all my data to Canonical, even though I disabled that setting.

(No seriously my CPU is capped at 100% right now).

Leaving Windows, and want to try Linux?

If you want to make the jump, here are 5 distributions I would recommend over Ubuntu:

Cloud Services, please encrypt locally beforehand.

I know that I made a post outlining why local backups aren’t for me, but they sort of are. The entire concept of “the cloud” can be rather complex, or simple, depending on how much you want to think about it – but in summary, it is defined as:

A cloud service is any service made available to users on demand via the Internet from a cloud computing provider's servers, as opposed to being provided from a company's own on-premises servers.

Storing everything from entire servers on AWS infrastructure to personal data in a personal cloud storage service has become popular in 2017 – even though a number of reputable cloud services have been compromised recently.

So, why? To many, it’s a simple method of storing data to be accessed via multiple devices, and is a form of “data backup”. Poppycock!

In this post I will briefly touch on some popular cloud providers, and some basic steps to secure your personal data.
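As a taste of the "encrypt locally beforehand" part, here is a minimal sketch using 7-Zip from PowerShell; the install path, source folder and archive name are all assumptions, so adjust them to suit:

# Assumes 7-Zip is installed in its default location.
$SevenZip = "C:\Program Files\7-Zip\7z.exe"
$Source   = "$HOME\Documents\Important"          # placeholder folder to protect
$Archive  = "$HOME\Desktop\Important.7z"

# 'a' adds to an archive, -p prompts for a passphrase, -mhe=on also encrypts file names
& $SevenZip a -t7z -mhe=on -p $Archive $Source

# Upload the resulting .7z to your cloud provider, rather than the raw files.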

Known Cloud Services Providers

Continue reading

Password Security, from a man with no background, but a love for EnPass IO.

In the digital age, we store everything from family photos, invoices and financial documents to our login details for every service we use, on a PC or smartphone, right? So it is little wonder that security is a subject on everyone's lips. The security game keeps changing, but in 2016/17 a new craze caught my fancy: password managers.

The whole premise of a password manager is to store the username and password for your accounts in an encrypted database, with a single master password used to retrieve the credentials on request – nifty.
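A real password manager is far more sophisticated than this, but the underlying idea can be sketched in a few lines of PowerShell, using Windows' DPAPI-backed credential export instead of a master password (purely a toy illustration, with a made-up file name):

# Store a credential encrypted for the current user (Windows DPAPI under the hood)
$cred = Get-Credential -Message "Credential to store"
$cred | Export-Clixml -Path "$HOME\example-site.cred.xml"

# Retrieve it later -- only the same user on the same machine can decrypt it
$stored = Import-Clixml -Path "$HOME\example-site.cred.xml"
$stored.UserName
$stored.GetNetworkCredential().Password    # plaintext; handle with care

A dedicated password manager adds a proper master password, cross-device sync and password auditing on top of that basic idea.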

Storing a local database that houses both the username and password to a multitude of accounts? Sounds risky, right? Not if you use the right methodology (or tool)! There is a large array of posts asking "should you trust password managers?", and I tend to be concerned about the security surrounding these products as well, but let me tell you the 5 key benefits of implementing a password manager as one part of your security that make it worthwhile:

  • Being able to use unique passwords per service reduces the risk of cross-service exposure should your password be leaked;
  • Being able to generate “strong” passwords based on requirements allows for a more randomized and secure approach to accounts;
  • Being able to ensure your passwords are stored in a centralized encrypted database, as compared to “passwords.txt”;
  • Allowing restricted access to your personal data (such as licenses, two-step codes etc.) in a restricted application amplifies security and;
  • Prevents you from forgetting passwords (and therefore from resorting to easy-to-remember passwords, or repeating them across sites)

Now, I would consider password managers one layer of password security. When implementing a secure process for storing logins, one can never be too careful. For example, to ensure the integrity of my data stays secure (or at least, more secure), I implement the following approach to my digital accounts:

  • I use 1Password to store my usernames to services, with the password field being a reference;
  • I use EnpassIO to reference the password codename to the actual password and;
  • I use Google Authenticator to provide a 2-Step Authentication approach.

This ensures that without access to both databases, there is no ability to compromise my accounts – the master passwords to both are unique and not recreated for any other service.

So, by relying on 3 unique services to all work in cohesion with one another for access to my accounts, I have improved the security layers surrounding my accounts. It is, however, worth mentioning that implementing 2-Step Authentication adds another layer of complexity to the account process. We will post more about two-step authentication in future posts.

Continue reading

Backups and Me Don’t Mesh. Here’s why.

It goes without saying that the content stored on most users' computers (that is, in the user directory) is important, regardless of what it is. That's why it is imperative to have frequent backups of the data should something occur, such as a crypto-ransomware infection.

Nowadays, there is a plethora of cloud services readily available to store your data in "the cloud", free of any dangers – or so they say. But does that make local backups redundant? No! You should still take action to protect the integrity of your data locally should anything go wrong.

Continue reading

Ransomware, oh the joys it brings.

Ransomware has been on an upward trend, notably so in Quarters 3 and 4 of 2016. The main delivery vector shifted away from phishing links, which dropped by 50% (Source: Proofpoint), towards RDP. According to Webroot, two thirds (66%) of ransomware infections in Q1 2017 were delivered via RDP.

For those who are unfamiliar with the term, ransomware can be summarised as:

Ransomware is a type of malicious software that blocks access to the victim’s data or threatens to publish or delete it until a ransom is paid.

Source: Ransomware – Wikipedia

However, ransomware is categorised as a form of cryptoviral extortion; it is an act of cryptovirology. Adam Young and Moti Yung published their findings on cryptoviral extortion (cited entries can be read here), where the process is described in 3 key phases:

  1. [attacker→victim] The attacker generates a key pair and places the corresponding public key in the malware. The malware is released.
  2. [victim→attacker] To carry out the cryptoviral extortion attack, the malware generates a random symmetric key and encrypts the victim’s data with it. It uses the public key in the malware to encrypt the symmetric key. This is known as hybrid encryption and it results in a small asymmetric ciphertext as well as the symmetric ciphertext of the victim’s data. It zeroizes the symmetric key and the original plaintext data to prevent recovery. It puts up a message to the user that includes the asymmetric ciphertext and how to pay the ransom. The victim sends the asymmetric ciphertext and e-money to the attacker.
  3. [attacker→victim] The attacker receives the payment, deciphers the asymmetric ciphertext with his private key, and sends the symmetric key to the victim. The victim deciphers the encrypted data with the needed symmetric key thereby completing the cryptovirology attack. The symmetric key is randomly generated and will not assist other victims. At no point is the attacker’s private key exposed to victims and the victim need only send a very small ciphertext to the attacker (the asymmetric ciphertext).

 

Looking at the latest WannaCry breakout, the process can be defined as the following 5 steps:

(Image source: Trend Micro – WannaCry blog post)

The process adopted here follows Young and Yung's model, whilst also leveraging SMB vulnerabilities to spread through networks.

Further investigation on this fault will be documented at a later stage.

On a side note: WannaKey. This tool may help recover files encrypted by WannaCry.

Continue reading