Conto2.3 – Log with PowerShell

As mentioned during the design [1], all the business logic related to the ET part will be in PowerShell. For those who may not know this language: it is a Microsoft product which can also be installed and used on Linux [2], even if I'm not sure that would make much sense. It gives a more flexible and advanced experience than the regular command line.

Starting to code in PowerShell is very easy and you just need an editor. I would suggest Visual Studio Code [3], which is free and lightweight, supports a lot of languages, and also provides a debug mode which can help you a lot with troubleshooting.

Coming back to our topic: first we need to design a way to log (on screen and to file) the actions the script is performing. To do that we'll create an ad-hoc function named WriteLog which will print to the shell and to a file whatever we want.

  • Screen writing will be covered by Write-Host "text to write"
  • File writing will be covered by Add-Content $LogFile -Value "text to write"

In the file output we are also adding a timestamp for each row we write.

function WriteLog
{
    Param (
        [Parameter(Mandatory)]
        [string]$LogString
    )

    # Prefix each file entry with a timestamp
    $Stamp = (Get-Date).ToString("yyyy/MM/dd HH:mm:ss")
    $LogMessage = "$Stamp - $LogString"
    # Append to the log file and echo to the console
    Add-Content $LogFile -Value $LogMessage
    Write-Host $LogString -ForegroundColor White
}

As the first statement you can find the word Param: this is used to declare the parameters of the function. In this case the only parameter is the text to log, and it is marked as Mandatory since there is no log without a text to log.
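Because the parameter is Mandatory, PowerShell will not run the function without it: calling WriteLog with no argument makes the shell prompt for a value instead of failing silently. In the console the prompt looks something like this:

PS> WriteLog

cmdlet WriteLog at command pipeline position 1
Supply values for the following parameters:
LogString: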

Let’s try it out then:

WriteLog "Text to write"

Executing the script from the command line (or in the Visual Studio Code IDE) we'll get the text we are expecting as the response of the shell.

WriteLog example

Good, but not enough, since we also want to write to a physical file. To do this we need to identify the file to write to and place it in the same folder where the script is located.

$myDir = Split-Path -Parent $MyInvocation.MyCommand.Path
$LogFile = "$myDir\log_$($env:computername)_$((Get-Date).toString("yyyyMMdd_HHmmss")).log"

In the first row we get the path of the folder containing the script and store it in the variable $myDir. In the second line we build the log file name, composed of the machine name plus a timestamp, in order to write a different file at each run.
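As a side note, on PowerShell 3.0 and later the automatic variable $PSScriptRoot already contains the folder of the running script, so the first line could also be written more concisely as:

$myDir = $PSScriptRoot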

Running the script again, we now have a log file in the folder.

Created log file
Log file content
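Given the format string used in WriteLog, each row in the log file should look something like this (the timestamp will of course differ on your machine):

2022/01/15 18:30:42 - Text to write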

Please note how simple string concatenation is in PowerShell when using variables: in $LogMessage = "$Stamp - $LogString" we are just writing the variables (starting with $) within the text. Quick and smart.
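This works for simple variables; for anything more complex, like a method call or an expression, you need to wrap it in the $( ) subexpression syntax, exactly as we did when building the log file name. A quick illustration:

$name = "Conto2.3"
# Simple variables expand directly inside double quotes
Write-Host "Script: $name"
# Method calls or expressions need the $( ) subexpression syntax
Write-Host "Today is $((Get-Date).ToString('yyyy/MM/dd'))"
# Single quotes do NOT expand: this prints the literal text $name
Write-Host 'Script: $name'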

To recap: we just created a first version of the PowerShell script, which logs a sample text on screen and to file. You may notice that, for easier reading, the function WriteLog is located at the top of the script while it is invoked only later, in the bottom part. Keeping functions grouped at the top is a convention, but note that in PowerShell a script runs top to bottom, so a function declaration does need to be executed before the line invoking it.

#begin FUNCTIONS
function WriteLog
{
    Param (
        [Parameter(Mandatory)]
        [string]$LogString
    )
    # Prefix each file entry with a timestamp
    $Stamp = (Get-Date).ToString("yyyy/MM/dd HH:mm:ss")
    $LogMessage = "$Stamp - $LogString"
    # Append to the log file and echo to the console
    Add-Content $LogFile -Value $LogMessage
    Write-Host $LogString -ForegroundColor White
}

#end FUNCTIONS

# Get the folder where the script is located
$myDir = Split-Path -Parent $MyInvocation.MyCommand.Path
# Set the log file name
$LogFile = "$myDir\log_$($env:computername)_$((Get-Date).toString("yyyyMMdd_HHmmss")).log"

WriteLog "Text to write"

You can download the zip with the script here [4] to try it on your own.

[1] https://www.beren.it/en/2022/01/07/conto2-3-the-design/

[2] https://docs.microsoft.com/it-it/powershell/scripting/install/installing-powershell-on-linux?view=powershell-7.2

[3] https://code.visualstudio.com/download

[4] https://www.beren.it/wp-content/uploads/2022/01/CONTO2.3-01.zip

Conto2.3 – The design

If you landed on this post it's probably because you have some interest in the topic (see all the requirements here), so we can now define in more detail how this process should work:

  1. define how to structure the input/output, focusing on who does what
  2. define how to configure the above in a way which will give us a high level of scalability and reusability
  3. design a logging system in order to monitor and analyze the actions taken by the system, helping the troubleshooting

1) Input and Output Definition

As said, we would like to handle different file types coming from different sources: the bank account details, the credit card report, and also different owners: if my wife and I have different banks, the source files will probably be different.

Source folder architecture

Theoretically I could add as many owners as I want, keeping a structure which presents the bank first and then the other sources. In those folders I would also load the files as they are, without doing any manipulation.

A PowerShell script placed in the root folder will load those files and normalize them. When I say normalize I mean mapping them to a set of columns which are mandatory for the system (a sketch of this step follows the list below):

  • Data Operazione: the time of the expense
  • Causale: a brief indication of the reason
  • Descrizione: detailed info about the expense
  • Ammontare: the value of the expense
  • Accredito: I noticed that in some cases the inbound amounts are placed in a different column from the outbound ones (Ammontare)
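To make the idea concrete, here is a minimal sketch of what the normalization step could look like, assuming a hypothetical CSV export with column names (Date, Reason, Details, Amount) that differ from our standard set; in the real script the column mapping will come from the configuration file rather than being hard-coded:

# Hypothetical source columns: Date, Reason, Details, Amount
$rows = Import-Csv "$myDir\INPUT\Owner1\Bank1\export.csv" -Delimiter ";"

# Rename the source columns to the mandatory set via calculated properties
$normalized = $rows | Select-Object `
    @{Name = "Data Operazione"; Expression = { $_.Date }},
    @{Name = "Causale";         Expression = { $_.Reason }},
    @{Name = "Descrizione";     Expression = { $_.Details }},
    @{Name = "Ammontare";       Expression = { $_.Amount }},
    @{Name = "Accredito";       Expression = { "" }}   # not present in this source

$normalized | Export-Csv "$myDir\OUTPUT\export.csv" -NoTypeInformation -Delimiter ";"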

Once the information is extracted from the source, it needs to be exported to an OUTPUT folder, also divided into two different folders, which will contain all the files in the same format and without any difference based on the owner; this will simplify the Excel job a lot.

Output folders configuration

The last step is in Excel itself, which will load those files and populate some spreadsheets in order to correctly display them in a proper Pivot table.

There is still one missing part though, which is the Mapping. Who maps the information in the files, and where does this mapping come from? Unfortunately this is the most painful part, since my expectation is to manage the mapping in a totally separate Excel file where I can define whether a description is categorized one way or another. This is the tricky part and the reason why Excel alone is not enough and we need PowerShell's help. Basically, right after the script runs it will load all the mapping information and then reuse it to map the expenses in a way that is easy to manage in Excel.
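As a rough illustration of the idea (the real rules will live in MasterCategories.xlsx; here I assume a hypothetical CSV export of it with Keyword and Category columns, and a simple substring match):

# Hypothetical mapping table: Keyword;Category (exported from MasterCategories.xlsx)
$mappings = Import-Csv "$myDir\MasterCategories.csv" -Delimiter ";"

function Get-Category
{
    Param (
        [Parameter(Mandatory)]
        [string]$Description
    )
    # Return the category of the first keyword found inside the description
    foreach ($m in $mappings)
    {
        if ($Description -like "*$($m.Keyword)*") { return $m.Category }
    }
    return "Uncategorized"
}

Get-Category "PAYMENT AT SUPERMARKET XYZ"   # e.g. "Groceries", if mapped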

2) How to configure the process

Everything we said so far is just the base logic, but in order to easily maintain and extend the process we need a configuration file which tells us where to find the files and the information within them. This is key to avoid changing code whenever something changes in the sources.
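Just to give an idea of the shape this could take (the actual structure of Config.xml is defined later in the project; the element and attribute names below are only an assumption), reading an XML configuration in PowerShell is straightforward:

# Load Config.xml from the script folder (hypothetical structure)
# <Config><Sources><Source Owner="Me" Bank="Bank1" Path="INPUT\Me\Bank1" /></Sources></Config>
[xml]$config = Get-Content "$myDir\Config.xml"

# Example: loop over the configured sources
foreach ($source in $config.Config.Sources.Source)
{
    WriteLog "Processing $($source.Owner) - $($source.Bank) from $($source.Path)"
}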

3) How to log

Finally, in order to have a good level of detail on what the process did, we need a way to log the actions for easy troubleshooting.

Root Folder

To recap:

  • Config.xml: the file containing the configuration
  • Conto2.3.ps1: the file where the ET (Extract and Transform) logic will be performed
  • Conto2.3.xlsx: the Excel file which will manage the L (Load) and create the Pivot
  • MasterCategories.xlsx: the file used for the mapping during the ET phase