Category Archives: Code

Bash, PowerShell, and useful scripting tips

Command-line tools for Windows Update

When you want to apply Windows updates on a machine where a GPO restricts the usual process, or remotely without having to RDP into it, knowing a few command-line options can be useful.

However, it’s hard to find the perfect way to do it; actually, I am still looking for it. I found the following possibilities, but none of them is perfect.

Applying updates remotely, from a PSSession

This command will trigger the installation of pending updates remotely, yet you won’t be able to see what’s going on, as there is no output:

USOCLIENT.EXE RefreshSettings StartScan StartDownload StartInstall 

The good thing about this one is that it is able to apply feature upgrades (from one Windows version to another). You will need to ask the user to reboot, though (they should see the orange button when restarting the computer).

Another way to apply updates is to use the PSWindowsUpdate PowerShell module.

With this method, it seems hard to apply updates from a PSSession without getting an access denied error; I still need to dig deeper to understand why. However, you can run it directly on the workstation (as an administrator, obviously) if the policy in place prevents you from doing it from the GUI:

# Install the module, once and for all
Install-Module PSWindowsUpdate

# Trigger a Windows Update check
Get-WindowsUpdate

# Finally, install the updates (this command does not work remotely)
Install-WindowsUpdate
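For the remote access-denied issue, the module itself ships a workaround worth trying: Invoke-WUJob schedules the installation as a local task on the target machine instead of running it over the remoting session. A minimal sketch, assuming PSWindowsUpdate is already installed on the remote host (SRV01 and the log path are placeholders):

```powershell
# Schedule Install-WindowsUpdate as a local scheduled task on SRV01 and run it immediately
Invoke-WUJob -ComputerName SRV01 -RunNow -Confirm:$false `
    -Script { Import-Module PSWindowsUpdate; Install-WindowsUpdate -AcceptAll | Out-File C:\PSWindowsUpdate.log }
```

Because the task runs locally on the target, you check the log file afterwards rather than watching live output.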

Install Terraform on Ubuntu 20.04

This post summarizes, for my own convenience, what to do in order to install Terraform on an Ubuntu machine.

Add the HashiCorp GPG key:

curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -

Add the repository:

sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"

Finally, update the package lists and install the terraform package:

sudo apt-get update && sudo apt-get install terraform

How to control your Windows Server’s cipher suites with IIS Crypto

When you are in charge of fixing vulnerabilities or troubleshooting encrypted communication issues, you often have to deal with upgrading or fixing cipher suites. Depending on the vendor, it is often complex to access, customize, or even know which cipher suites are available.

For Windows Server, a company called Nartac provides a free tool called IIS Crypto that will help you configure your servers’ security in a snap!

Using IIS Crypto with a GUI

Nartac offers two versions of its tool: one with a GUI, and a CLI version. I would recommend installing the GUI version first to get familiar with it: you will see which cipher suites and Schannel protocols are available on your system, understand how the product works, and finally be able to create custom templates to use with the GUI or, even better, with the CLI.

IIS Crypto GUI

IIS Crypto CLI

Once you’re comfortable with IIS Crypto, and especially if you have many servers to manage, I would highly recommend going with the CLI version.

You can deploy IIS Crypto through chocolatey and then apply a pre-existing template, or a custom one depending on your needs:

Here, I apply an embedded template (Strict) while asking for a reboot so this template is applied immediately.
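A hedged sketch of that CLI workflow (the chocolatey package id and the /reboot switch are assumptions worth double-checking against Nartac’s documentation):

```powershell
# Install the CLI version through chocolatey
choco install iiscrypto-cli -y

# Apply the built-in Strict template and reboot so it takes effect immediately
IISCryptoCli.exe /template strict /reboot
```

The same /template switch also accepts a path to a custom .ictpl template exported from the GUI, which is what makes the GUI-then-CLI workflow convenient at scale.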

Git – memory helper (Work in progress)

This is a very simple post about some commands I wanted to remember, especially the ones required to install and configure Git on a new computer. I will keep this post updated in the future, so feel free to come again from time to time.

How to push your project code to your new Git server on Windows

First, you need to install Git on your Windows machine, I’m using chocolatey for that task:

choco install git
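Since this memo is also about configuring Git on a new computer, it’s worth adding the identity setup that a first commit requires; the name and email below are placeholder values:

```shell
# Set the identity recorded in your commits (placeholder values)
git config --global user.name "Your Name"
git config --global user.email "you@example.com"
```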

Then we need to add our remote repository with the following commands:

# Move to your project folder
cd D:\mydir

# Initialize Git
git init

# Add a connection to your remote Git server
git remote add my-app git@10.0.0.77:my-app.git

Now, we need to push the content to the remote repo. Before doing that, we need to add our project’s files to the local repo, then commit those changes, and finally push the whole thing:

# Add all our projects files
git add .

# Commit those new files to the local repo, the message is mandatory
git commit -m 'Initial commit'

# Finally, push everything to the remote Git
git push my-app master

How to deploy Git “client” on Linux

Install the package, here for Ubuntu:

apt-get install git

Go to the folder you want, then run the following commands:

cd /var/www/mydir

# Initialize Git
git init

# Add a connection to your remote Git server
git remote add my-app git@10.0.0.77:my-app.git

# Pull the data from the master branch of the remote Git server
git pull my-app master

Chaining grep commands after tail -f

Sometimes, I need to filter real-time logs on a server that is not part of a log aggregation setup, such as Graylog or ELK.

Sometimes it’s also very convenient and quick to run a command and see live what’s going on.

To see any new line added to a log file, you should already know the tail command:

tail -f /var/log/mail.log

Unfortunately, you won’t be able to use this command with more than one command chained after it, for instance:

# The following command will work:
tail -f /var/log/mail.log | grep from=

# The one below won't show you an error, but won't display anything either:
tail -f /var/log/mail.log | grep from= | grep me@domain.com

Some commands, like grep, come with a specific option that can work around this issue: --line-buffered

Not all tools have it, though; with cut, for example, you will have no such luck. If one of the commands you want to use does not provide such an option, use it at the end of the pipeline.

Let’s make a quick example: if I want to use two grep commands and a cut, I can do:

tail -f /var/log/mail.log | grep --line-buffered from= | grep --line-buffered -v -e "from=<>" -e "root" | cut -d ':' -f 4,5

This will show me the 4th and 5th fields of any new line added to mail.log that contains the expression “from=”, filtering out any empty sender or root (“from=<>” and “root”). So I will get output like this one:

33FEA13B970: from=<user@domain.com>, size=467, nrcpt=1 (queue active)

How to use the cut command on Linux Bash

Often, you will have to deal with strings in your scripts, for instance when parsing syslog files to extract and graph very specific information. And when it comes to string manipulation, the cut command can be very useful!

Cutting by byte position

This mode will allow you to cut from one position to another, let’s say between the first and the fourth byte. If your string includes a special character that uses more than one byte, cutting with this mode can be tricky. As a result, I don’t use this mode, yet it’s good to know for very specific usage:

# This will return M
echo 'My first string' | cut -b 1 

# This will return Mf
echo 'My first string' | cut -b 1,4

# This will return My first
echo 'My first string' | cut -b 1-8

Cutting by character

This is the one I use the most. It works like the previous mode, but it doesn’t take into account the size of each character, only its position, so no bad surprises:

# This will return M
echo 'My first string' | cut -c 1 

# This will return Mf
echo 'My first string' | cut -c 1,4

# This will return My first
echo 'My first string' | cut -c 1-8

Cutting by delimiter

Here comes the real power of cut: using a single-character delimiter allows you to do great things. In addition to the delimiter, you need to provide the field number; see the following to better understand.

Let’s take a more complex example to show you what is possible. Imagine you want to list all the shells used by the users on your system. This information can be found in /etc/passwd.

First, let’s see what an entry in /etc/passwd looks like:

smar:*:101:1:Seb Mar:/home/smar:/usr/bin/zsh

We have 7 fields; the one we want is the last one. All those fields are separated by a colon. So it’s pretty simple: “:” will be our delimiter, and we will ask for the 7th field:

# This will show you the list of all the shells defined in /etc/passwd
# With duplicates, though
cat /etc/passwd | cut -d ':' -f 7

To finish our little exercise properly, let’s remove the duplicates using sort:

# This will show you the list of all the shells defined in /etc/passwd
# Without duplicates!
cat /etc/passwd | cut -d ':' -f 7 | sort -u

Please note that extracting more than a single field will return the selected fields separated by the same delimiter. For instance:

# The command below will return smar:/usr/bin/zsh
echo "smar:*:101:1:Seb Mar:/home/smar:/usr/bin/zsh" | cut -d ':' -f 1,7

However, if you want to replace this delimiter, you can use the --output-delimiter option:

# The command below will return smar,/usr/bin/zsh
echo "smar:*:101:1:Seb Mar:/home/smar:/usr/bin/zsh" | cut -d ':' -f 1,7 --output-delimiter=','

Using --complement to reverse the result

If you need the opposite of your filter’s result, cut provides the --complement option to achieve that:

# While this will return My first
echo 'My first string' | cut -c 1-8

# This will return " string" (note the leading space)
echo 'My first string' | cut -c 1-8 --complement
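--complement also combines with a delimiter and field selection, which is handy when you want to drop just one field; a quick sketch reusing the /etc/passwd entry from earlier:

```shell
# Drop only the 2nd field (the password placeholder), keep everything else
echo 'smar:*:101:1:Seb Mar:/home/smar:/usr/bin/zsh' | cut -d ':' -f 2 --complement
# This will return smar:101:1:Seb Mar:/home/smar:/usr/bin/zsh
```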

I hope those tips will help you script more efficiently!

Using Terraform with Azure

This post is a sticky note about how to configure your workstation in order to use Terraform with Microsoft Azure.

Configure Azure CLI on your station

First of all, you need to install this client to be able to connect to your Azure subscription. To do so, download the software from the Microsoft website. For Windows users, it’s a simple MSI package to install; then open a PowerShell console and run the login command as follows:

az login

This command will open your browser and ask you to log in to your Azure account.

Once validated, you will be able to see your subscriptions within the PowerShell console. If you have several subscriptions, take note of the tenant ID and the ID of the subscription you want to use, as below:

az account list
[
  {
    "cloudName": "AzureCloud",
    "homeTenantId": "xxx-xxx-xxx-xxx-xxx",
    "id": "xxx-xxx-xxx-xxx-xxx",
    "isDefault": true,
    "managedByTenants": [],
    "name": "Pay-As-You-Go",
    "state": "Enabled",
    "tenantId": "xxx-xxx-xxx-xxx-xxx",
    "user": {
      "name": "xxx@xxx.xxx",
      "type": "user"
    }
  },
  {
    "cloudName": "AzureCloud",
    "homeTenantId": "xxx-xxx-xxx-xxx-xxx",
    "id": "xxx-xxx-xxx-xxx-xxx",
    "isDefault": false,
    "managedByTenants": [],
    "name": "Pay-As-You-Go",
    "state": "Enabled",
    "tenantId": "xxx-xxx-xxx-xxx-xxx",
    "user": {
      "name": "xxx@xxx.xxx",
      "type": "user"
    }
  }
]

That’s all for the Azure client, pretty simple, isn’t it?

Configuring Terraform on your station

This step is easy too, really: just browse the Terraform download page to get the version that suits you. Keep in mind that, depending on the cloud provider (Azure in this very case), a version may or may not be compatible with the provider. At this time, I’m using Terraform v0.12.21.

OK, so unpack the archive to the folder of your choice. From here, you can use the previous PowerShell console and browse to this folder. This is optional, but if you want to be able to run terraform from anywhere, you will need to add it to your PATH environment variable. I won’t describe that process in this post, though.

Open the Terraform folder with your file browser, then create a file called main.tf inside it. This file will contain the very basic information needed to connect to Azure:

provider "azurerm" {
  version = "=1.44.0"

  subscription_id = "xxx-xxx-xxx-xxx-xxx"
  tenant_id       = "xxx-xxx-xxx-xxx-xxx"
  skip_provider_registration = true
}

The first declaration is about the provider required to connect to and manage Azure cloud services. Terraform strongly recommends pinning the version of the provider being used, so this value will change in the future. Then set the subscription_id and tenant_id variables with the information gathered earlier. skip_provider_registration is set to true here due to some restricted permissions on my side; that might not be the case for you.

If you have already created a resource group, you can also add the following declaration to your file:

resource "azurerm_resource_group" "my-resource-group" {
  name     = "my-resource-group"
  location = "US West"
}

Just keep in mind that, sadly, Terraform won’t import the resources you defined into its state automatically. You will have to do that manually.
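The manual route is the terraform import command, which maps an existing Azure object to a resource address in your state; a sketch with a placeholder subscription ID, assuming the my-resource-group declaration above:

```shell
terraform.exe import azurerm_resource_group.my-resource-group /subscriptions/xxx-xxx-xxx-xxx-xxx/resourceGroups/my-resource-group
```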

Now, run the following command to initialize Terraform, making it download the defined provider and create the first local state file.

terraform.exe init

Initializing the backend...

Successfully configured the backend "azurerm"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "azurerm" (hashicorp/azurerm) 1.44.0...

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Storing Terraform State in an Azure storage

You should see in the Terraform folder a file called terraform.tfstate which contains a representation of what you have in Azure.

If you are planning to work with a team, you should use a common state file shared across the team. For that, we are going to create a storage account and a container to put this terraform.tfstate file in. Note that Azure will manage the lock on that file automatically, preventing simultaneous access by your coworkers. To do so, create a storage-accounts.tf file (choose the name that suits you best):

resource "azurerm_storage_account" "my-storage" {
  name                     = "my-storage"
  resource_group_name      = azurerm_resource_group.my-resource-group.name
  location                 = azurerm_resource_group.my-resource-group.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_storage_container" "my-storage" {
  name                  = "tfstate"
  storage_account_name  = azurerm_storage_account.my-storage.name
  container_access_type = "private"
}

Here, we have defined a storage account and a container within it; I’ll let you check the documentation about the replication type.

Now, let’s create those resources with the simple command below:

terraform.exe apply

This command will check Azure to see which changes (updates, creations, or deletions) it has to make. In the end, it will show you the plan, asking for a yes in order to apply it. Type yes and watch the magic happen.

Now you have a container to put your state file in; however, we need to update our main.tf file in order to use it. Edit the file by adding the following content after the provider definition:

terraform {
  backend "azurerm" {
    resource_group_name   = "my-resource-group"
    storage_account_name  = "my-storage"
    container_name        = "tfstate"
    key                   = "terraform.tfstate"
  }
}

The key parameter represents the state file in your local Terraform folder. Run a new init, this time to apply those changes:

terraform.exe init

Now we’re getting serious: you can work from different places on the same Azure subscription. That’s a good start.

From there, you can go to Terraform’s documentation page and see what the Azure provider can do for you.

The following scripts are just reminders for me; you might not be interested.

Virtual network, route table, and subnet

resource "azurerm_virtual_network" "my-network" {
  name                = "my-network"
  address_space       = ["10.62.34.128/25"]
  location            = azurerm_resource_group.my-resource-group.location
  resource_group_name = azurerm_resource_group.my-resource-group.name
}

resource "azurerm_route_table" "my-route-table" {
  name                          = "my-route-table"
  location                      = azurerm_resource_group.my-resource-group.location
  resource_group_name           = azurerm_resource_group.my-resource-group.name
}

resource "azurerm_subnet" "my-subnet" {
  name                 = "my-subnet"
  resource_group_name  = azurerm_resource_group.my-resource-group.name
  virtual_network_name = azurerm_virtual_network.my-network.name
  address_prefix       = "10.62.34.128/27"
  route_table_id       = azurerm_route_table.my-route-table.id
} 

A basic Linux machine

resource "azurerm_network_interface" "my-interface" {
  name                = "my-interface"
  location            = azurerm_resource_group.my-resource-group.location
  resource_group_name = azurerm_resource_group.my-resource-group.name

  ip_configuration {
    name                          = "ipconfig1"
    subnet_id                     = azurerm_subnet.my-subnet.id
    private_ip_address_allocation = "Static"
    private_ip_address            = "10.62.34.136"
  }
}

resource "azurerm_virtual_machine" "my-linux-vm" {
  name                  = "my-linux-vm"
  location              = azurerm_resource_group.my-resource-group.location
  resource_group_name   = azurerm_resource_group.my-resource-group.name
  network_interface_ids = [azurerm_network_interface.my-interface.id]
  vm_size               = "Standard_DS1_v2"

  # Delete the OS disk automatically when deleting the VM
  delete_os_disk_on_termination = true

  # Delete the data disks automatically when deleting the VM
  delete_data_disks_on_termination = true

  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "18.04-LTS"
    version   = "latest"
  }
  storage_os_disk {
    name              = "my-linux-vm-dsk01"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
    disk_size_gb      = 50
  }
  os_profile {
    computer_name  = "my-linux-vm"
    admin_username = "myUser"
    admin_password = "ASup3r53cr3t!!@"
  }
  os_profile_linux_config {
    disable_password_authentication = false
  }
  tags = {
    type = "A tag I want"
  }
}