Thursday, 31 October 2024

Extracting Azure Logs from a Storage Account

I was recently asked to look at the logs for an Azure Key Vault and extract all of the IP addresses that had been connecting to it.

This would have been a simple task if the logs were in a Log Analytics Workspace, but they had been configured to go into an Azure Storage Account.

The problem with having the data in an Azure Storage Account is that it isn't structured in a way that makes it easy to get at: the logs are written into a deeply nested folder hierarchy, one folder per resource and then per year, month, day and hour, ending in an hourly file:

resourceId=/SUBSCRIPTIONS/.../RESOURCEGROUPS/GRUSSBLOG01/PROVIDERS/MICROSOFT.KEYVAULT/VAULTS/GRUSSEXAMPLEKV/y=2024/m=10/d=31/h=16/m=00/PT1H.json

Within each file the data isn't formatted as a single JSON document either. Each row is a standalone JSON object (the JSON Lines format), but the file as a whole isn't valid JSON, which means that we can't read in the entire file in one go:

{ "time": "2024-10-31T16:04:55.4510338Z", "category": "AuditEvent", "operationName": "VaultGet", "resultType": "Success", "correlationId": "abcdef12-ab12-34cd-1234-abcdef123456", "callerIpAddress": "1.1.1.1", "identity": {"claim":{"http://schemas.microsoft.com/identity/claims/objectidentifier":"ea1ad6fd-e8e9-41d3-b6c8-f062fd2d36fb","http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn":"live.com#[email protected]","http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name":"live.com#xxxx.xx.xx","appid":"abcdef12-ab12-34cd-1234-abcdef123456"}}, "properties": {"id":"https://grussexamplekv.vault.azure.net/","clientInfo":"Mozilla/5.0","requestUri":"https://management.azure.com/subscriptions/abcdef12-ab12-34cd-1234-abcdef123456/resourceGroups/grussBlog01/providers/Microsoft.KeyVault/vaults/GrussExampleKV?api-version=2018-02-14","httpStatusCode":200,"properties":{"sku":{"Family":"A","Name":"Standard","Capacity":null},"tenantId":"abcdef12-ab12-34cd-1234-abcdef123456","networkAcls":{"bypass":"None","defaultAction":"Allow"},"enabledForDeployment":false,"enabledForDiskEncryption":false,"enabledForTemplateDeployment":false,"enableSoftDelete":true,"softDeleteRetentionInDays":7,"enableRbacAuthorization":true,"enablePurgeProtection":null}}, "resourceId": "/SUBSCRIPTIONS/abcdef12-ab12-34cd-1234-abcdef123456/RESOURCEGROUPS/GRUSSBLOG01/PROVIDERS/MICROSOFT.KEYVAULT/VAULTS/GRUSSEXAMPLEKV", "operationVersion": "2018-02-14", "resultSignature": "OK", "durationMs": "32"}
{ "time": "2024-10-31T16:15:03.5039974Z", "category": "AuditEvent", "operationName": "VaultGet", "resultType": "Success", "correlationId": "abcdef12-ab12-34cd-1234-abcdef123456", "callerIpAddress": "1.1.1.1", "identity": {"claim":{"http://schemas.microsoft.com/identity/claims/objectidentifier":"ea1ad6fd-e8e9-41d3-b6c8-f062fd2d36fb","http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn":"live.com#xxxx.xx.xx","http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name":"live.com#xxxx.xx.xx","appid":"abcdef12-ab12-34cd-1234-abcdef123456"}}, "properties": {"id":"https://grussexamplekv.vault.azure.net/","clientInfo":"python/3.6.6","requestUri":"https://management.azure.com/subscriptions/abcdef12-ab12-34cd-1234-abcdef123456/resourceGroups/grussBlog01/providers/Microsoft.KeyVault/vaults/GrussExampleKV?api-version=2019-09-01","httpStatusCode":200,"properties":{"sku":{"Family":"A","Name":"Standard","Capacity":null},"tenantId":"abcdef12-ab12-34cd-1234-abcdef123456","networkAcls":{"bypass":"None","defaultAction":"Allow"},"enabledForDeployment":false,"enabledForDiskEncryption":false,"enabledForTemplateDeployment":false,"enableSoftDelete":true,"softDeleteRetentionInDays":7,"enableRbacAuthorization":true,"enablePurgeProtection":null}}, "resourceId": "/SUBSCRIPTIONS/abcdef12-ab12-34cd-1234-abcdef123456/RESOURCEGROUPS/GRUSSBLOG01/PROVIDERS/MICROSOFT.KEYVAULT/VAULTS/GRUSSEXAMPLEKV", "operationVersion": "2019-09-01", "resultSignature": "OK", "durationMs": "51"}
{ "time": "2024-10-31T16:04:49.1252392Z", "category": "AuditEvent", "operationName": "VaultGet", "resultType": "Success", "correlationId": "abcdef12-ab12-34cd-1234-abcdef123456", "callerIpAddress": "1.1.1.1", "identity": {"claim":{"http://schemas.microsoft.com/identity/claims/objectidentifier":"ea1ad6fd-e8e9-41d3-b6c8-f062fd2d36fb","http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn":"live.com#xxxx.xx.xx","http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name":"live.com#xxxx.xx.xx","appid":"abcdef12-ab12-34cd-1234-abcdef123456"}}, "properties": {"id":"https://grussexamplekv.vault.azure.net/","clientInfo":"Mozilla/5.0","requestUri":"https://management.azure.com/subscriptions/abcdef12-ab12-34cd-1234-abcdef123456/resourceGroups/grussBlog01/providers/Microsoft.KeyVault/vaults/GrussExampleKV?api-version=2023-08-01-preview","httpStatusCode":200,"properties":{"sku":{"Family":"A","Name":"Standard","Capacity":null},"tenantId":"abcdef12-ab12-34cd-1234-abcdef123456","networkAcls":{"bypass":"None","defaultAction":"Allow"},"enabledForDeployment":false,"enabledForDiskEncryption":false,"enabledForTemplateDeployment":false,"enableSoftDelete":true,"softDeleteRetentionInDays":7,"enableRbacAuthorization":true,"enablePurgeProtection":null}}, "resourceId": "/SUBSCRIPTIONS/abcdef12-ab12-34cd-1234-abcdef123456/RESOURCEGROUPS/GRUSSBLOG01/PROVIDERS/MICROSOFT.KEYVAULT/VAULTS/GRUSSEXAMPLEKV", "operationVersion": "2023-08-01-preview", "resultSignature": "OK", "durationMs": "38"}
{ "time": "2024-10-31T16:04:44.1125905Z", "category": "AuditEvent", "operationName": "VaultGet", "resultType": "Success", "correlationId": "abcdef12-ab12-34cd-1234-abcdef123456", "callerIpAddress": "1.1.1.1", "identity": {"claim":{"http://schemas.microsoft.com/identity/claims/objectidentifier":"ea1ad6fd-e8e9-41d3-b6c8-f062fd2d36fb","http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn":"live.com#xxxx.xx.xx","http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name":"live.com#xxxx.xx.xx","appid":"abcdef12-ab12-34cd-1234-abcdef123456"}}, "properties": {"id":"https://grussexamplekv.vault.azure.net/","clientInfo":"Mozilla/5.0","requestUri":"https://management.azure.com/subscriptions/abcdef12-ab12-34cd-1234-abcdef123456/resourceGroups/grussBlog01/providers/Microsoft.KeyVault/vaults/GrussExampleKV?api-version=2023-08-01-preview","httpStatusCode":200,"properties":{"sku":{"Family":"A","Name":"Standard","Capacity":null},"tenantId":"abcdef12-ab12-34cd-1234-abcdef123456","networkAcls":{"bypass":"None","defaultAction":"Allow"},"enabledForDeployment":false,"enabledForDiskEncryption":false,"enabledForTemplateDeployment":false,"enableSoftDelete":true,"softDeleteRetentionInDays":7,"enableRbacAuthorization":true,"enablePurgeProtection":null}}, "resourceId": "/SUBSCRIPTIONS/abcdef12-ab12-34cd-1234-abcdef123456/RESOURCEGROUPS/GRUSSBLOG01/PROVIDERS/MICROSOFT.KEYVAULT/VAULTS/GRUSSEXAMPLEKV", "operationVersion": "2023-08-01-preview", "resultSignature": "OK", "durationMs": "30"}
Extracting the callerIpAddress therefore required a script to navigate through the folders (recursively), read each file, and pull out the information we needed.

$logsDirectory = '<< location of directory >>'

# Loop through each folder and output the full path of any *.json files.
Function Get-JsonFile {
    Param ($jsonDirectory)
    
    Get-ChildItem -Path $jsonDirectory | ForEach-Object {
        # Write-Output $_.Name
        If ($_.Name -like "*.json") {
            $_.FullName
        } ElseIf ($_.PSIsContainer) {
            Get-JsonFile -jsonDirectory $_.FullName
        } 
    }
}

# Treat each row as a json document and output the callerIpAddress
Function ExtractIPFromJsonFile {
    Param ($jsonFileName)

    $jsonFile = Get-Content $jsonFileName
    ForEach ($row in $jsonFile) {
        $jsonRow = $row | ConvertFrom-Json
        Write-Output $jsonRow.callerIpAddress
    }
}

# Get a list of all of the files using the above function
$files = Get-JsonFile $logsDirectory
ForEach ($file in $files) {
    $IPs = ExtractIPFromJsonFile -jsonFileName $file
    # Add the IP addresses found to a file.
    Add-Content -path "$logsDirectory/IPs.txt" -Value $IPs
}


# Now all IPs are written to file
# read it back in and get distinct IPs (and counts)
$allIPs = Get-Content -path "$logsDirectory/IPs.txt" | Group-Object
$allIPs | ForEach-Object {
    [PSCustomObject]@{
        Line = $_.Name
        Count = $_.Count
    }
} | Sort-Object -Property Count -Descending

This will output the IP addresses along with the number of times each has connected to the Key Vault. Note: this doesn't check whether the requests were successful or not! Not groundbreaking, but it might be useful for someone (or me!) in the future who needs to dive into logs stored in a storage account.
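
If you did need to count only successful requests, a small tweak to the extraction function would do it. Here's a sketch that filters on the resultType field seen in the sample rows above (httpStatusCode would work just as well):

Function ExtractIPFromSuccessfulRequests {
    Param ($jsonFileName)

    Get-Content $jsonFileName | ForEach-Object {
        $jsonRow = $_ | ConvertFrom-Json
        # Only keep rows where the request succeeded
        If ($jsonRow.resultType -eq 'Success') {
            Write-Output $jsonRow.callerIpAddress
        }
    }
}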

Thursday, 23 December 2021

Using templates in an Azure YAML pipeline

I was recently asked by a colleague how to use templates within a YAML pipeline. They wanted to template part of the deployment, because they have the option to deploy to different Azure App Services for testing.

To do this we created a simple dummy pipeline:

trigger:
- main

stages:
  - stage: Build
    jobs:
    - job: Compilation
      pool:
        vmImage: ubuntu-latest

      steps:
      - script: echo Build build build!
        displayName: 'Compile code'

  - stage: Test01
    jobs:
    - job: DeployToTest01
      pool:
        vmImage: ubuntu-latest

      steps:
      - script: echo Steps to deploy to Test01
        displayName: 'Deploy'

    - job: RunTestsOnTests01
      dependsOn: DeployToTest01
      pool:
        vmImage: ubuntu-latest

      steps:
      - script: echo Tests on Tests01
        displayName: 'Run tests'

This shows the initial stage which would be used to build the code and another stage that would be for deploying to the App Service.

Everything in the Test01 stage needs to be duplicated for other test environments, but ideally we didn't want to bloat the pipeline with a lot of duplication. Also, as Test01 is really a deployment phase, it should be switched to a deployment job as well.

We created a new YAML file in the repo called azure-environment.yaml:

parameters:
- name: env
  type: string
  default: false

stages:
  - stage: Azure${{ parameters.env }}
    dependsOn: Build
    jobs:
    - deployment: DeployTo${{ parameters.env }}Dev
      environment: ${{ parameters.env }}
      pool:
        vmImage: ubuntu-latest
      strategy:
        runOnce:
          deploy:
            steps:
            - script: echo Deploy to ${{ parameters.env }} Dev
              displayName: 'Deploy'

    - job: RunTestsOn${{ parameters.env }}Dev
      dependsOn: DeployTo${{ parameters.env }}Dev
      pool:
        vmImage: ubuntu-latest

      steps:
      - script: echo Tests on ${{ parameters.env }}Dev
        displayName: 'Run tests'


The first four lines show that a parameter called env is expected; this is then used to build the stage, deployment, and job names.


This can then be used by updating the main YAML pipeline:

trigger:
- main

stages:
  - stage: Build
    jobs:
    - job: Compilation
      pool:
        vmImage: ubuntu-latest

      steps:
      - script: echo Build build build!
        displayName: 'Compile code'

  - template: azure-environment.yaml
    parameters:
      env: Test01

  - template: azure-environment.yaml
    parameters:
      env: Test02

This means that a template block can easily be added on a branch if, for a period of time, the code needs to be deployed to another test environment.


As the deployment uses an environment, these need to be configured in Azure DevOps. An environment doesn't have to contain anything, although it does provide the functionality for approvals, which could be useful for higher environments (such as PreProd and Production).


To create an environment, select Environments under Pipelines in Azure DevOps.
Then follow the steps and create an empty one for each name (in our case Test01 and Test02).
If you are interested in having someone approve the deployment, use 'Approvals and Checks' to add a user or group.


Saturday, 27 November 2021

Mining Monero coin using Docker and K8s!

Following on from the previous blog post, where we installed and configured Docker Desktop and enabled Kubernetes (K8s), I thought I'd play around with mining a digital currency. There are so many digital currencies around that I chose one that doesn't need a massive computer: Monero, which can even be mined on a Raspberry Pi.

Now, mining Monero isn't going to make you rich unless you've got a room full of hefty computers, but it can help you understand a bit about how Docker and Kubernetes work, and you might make a penny or two in the process.

Creating the Dockerfile

To do this we are going to create our own Docker image, which I've based on Alpine Linux. The main reason for this is that it is small, lightweight, and perfect for what we need.
Create a new directory somewhere on your machine and create a new file called dockerfile, with no extension.
Open the file in your favourite text editor and add the following line:

FROM alpine:latest

This tells Docker that when we build our image we want to start from the latest version of the Alpine image on Docker Hub. It's under 3 MB - not bad for an operating system!

For this container, instead of downloading a pre-built release of the miner, we're going to clone the git repository and build the code ourselves - partly to make it a bit more of a challenge, and partly to show what can be achieved in a Dockerfile.

To do this we need to update the package index and install the build tools we need, so add the following to the file:

RUN apk update
RUN apk add git build-base cmake libuv-dev libressl-dev hwloc-dev 

Great - at this point our image will have Alpine Linux along with the build tools installed. Now let's clone the git repo of the miner.

RUN git clone https://github.com/xmrig/xmrig.git

When we build the image this will use the git tool installed earlier to connect to GitHub and download all of the files.
To build the code I followed the instructions in the xmrig documentation, prefixing each build step with the RUN command (and using WORKDIR to change directory).

WORKDIR xmrig
RUN mkdir build
WORKDIR build
RUN cmake ..
RUN make -j$(nproc)

Now we're almost done. At this point, when we build the image it will download Alpine Linux, install the tools, download the code from the GitHub repo and then build it.

Monero Wallet

To be able to mine any digital currency you need somewhere to store it; this is called a digital wallet.
If you don't have a Monero wallet, browse to their website and download the GUI Wallet. Install it (you may have to add approval rules to your anti-virus).
Once installed, run the program and follow the steps: select Simple Mode and Create a New Wallet. This will ask you for a name and a location, and it will also generate a mnemonic seed. It is very important that this is stored somewhere safe, as without it you won't be able to recover your wallet! Keeping a printed copy as well as storing it in a password safe is a good idea. Finally, create a secure password and store that in your password safe too.
Once you've done this, Monero will need to synchronise, which will take a few minutes (don't panic).
Now you can click on Account and then click the icon to copy the Primary Account address; this is the ID of your Monero wallet.


Mining Pools

To increase your chances of earning money from a digital currency, people group together into a pool; the pool then gives you a percentage of the revenue generated, depending on how much your computer helped.
I've used a pool called Monero Ocean, but there are other options available, which can be found by using the xmrig configuration wizard.


The final step is to add the line to the Dockerfile to start the miner:
CMD ./xmrig -o gulf.moneroocean.stream:10128 -u <Wallet ID>
Replacing <Wallet ID> with the one copied from the Monero GUI Wallet.

Your complete Dockerfile should look like this:

FROM alpine:latest
RUN apk update
RUN apk add git build-base cmake libuv-dev libressl-dev hwloc-dev

RUN git clone https://github.com/xmrig/xmrig.git

WORKDIR xmrig
RUN mkdir build
WORKDIR build
RUN cmake ..
RUN make -j$(nproc)

CMD ./xmrig -o gulf.moneroocean.stream:10128 -u <Wallet ID>

Build the Docker Image

To build the image, open a command prompt and change to the directory where you created the dockerfile.

docker build -t monero .

This will build the image and give it a tag of the name monero.  You can of course change this to be anything you like.

Once it is built you can view the image by typing

docker images

It will be listed with its repository name (monero), tag, image ID, created time, and size.

Now we can start the container by running:

docker run -it monero

The -it switches start the container interactively, allowing us to see the output: xmrig prints its startup banner, connects to the pool, and then periodically reports its hash rate.
Wahoo - we've built a container image from scratch that downloads the code from GitHub, builds it, and then runs the miner to generate Monero.

Kubernetes

To take this to the next stage let's deploy this to our Docker Desktop Kubernetes installation.

To deploy this to K8s we need a YAML file; this file describes to K8s how to deploy our container.
Create a new file (I called mine monero.yaml) in the same location as the dockerfile and add the following:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: monero
  namespace: monero
spec:
  replicas: 2
  selector:
    matchLabels:
      app: monero
  template:
    metadata:
      labels:
        app: monero
    spec:
      containers:
        - name: monero
          image: monero:latest
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              memory: 4096Mi
              cpu: "1"
            requests:
              memory: 4096Mi
              cpu: "0.5"

Some things to point out:

  replicas: 2

This is the number of instances we want in our cluster.  If you are running this locally you may be limited by the amount of memory your computer has.

          image: monero:latest

This is the name of the image we built locally; if you called it something different it will need to be updated here.

          imagePullPolicy: IfNotPresent

Because our image is local and not in a public registry, we need Kubernetes to use the local copy rather than trying to pull it.

At the bottom of the file are the resources. These don't need to be specified, but I've found that the miner needs around 4 GB of RAM; if it doesn't have enough, the container will be killed (OOMKilled) and restarted.
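
Once it's deployed (the steps follow below), a quick way to check whether a pod is hitting that limit is to look at the restart count and the last termination reason - a sketch:

# A climbing RESTARTS count suggests the containers are being killed
kubectl get pods --namespace monero

# Look for 'OOMKilled' in the Last State section of the output
kubectl describe pod --namespace monero | Select-String "OOMKilled", "Restart Count"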

Now we're almost there. Before we can deploy this we need to create the namespace for our deployment; as specified in the file, this is called monero.

Open a PowerShell window and type:

kubectl config current-context

This should state:
docker-desktop

If it doesn't, list the contexts:
kubectl config get-contexts

Then select your cluster:
kubectl config use-context docker-desktop

To create the namespace, which we called monero, type:
kubectl create namespace monero

Now, to deploy our image to the cluster with 2 replicas, type:
kubectl apply -f monero.yaml

Which should respond with:
deployment.apps/monero created
(or configured, if you have applied it before)

To view the status you can describe the pod with this command:
kubectl describe pod --namespace monero
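
You can also watch the miner output itself; since the pod names get generated suffixes, it's easiest to point kubectl logs at the deployment:

kubectl logs deployment/monero --namespace monero --follow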

If you've installed the dashboard (see my previous post) you should be able to see what is happening with a GUI.  Run the command:
kubectl proxy

Enter the token to log in, which can be obtained using this command:
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"

Make sure you select the monero namespace, and you can see the pods that are running and mining!

Viewing your earnings!

This isn't going to make you rich, but you can view the amount you have earned on the Monero Ocean website.
Simply paste in your wallet ID and it will provide a rundown of how much you have contributed and earned. Don't expect more than a few pence per day!

Monday, 26 July 2021

Using Kubernetes on Docker for Windows

Kubernetes is the industry-standard tool for orchestrating containers, with Azure and AWS both offering their own hosted platforms for it. But what if you want to test it locally (and you're on Windows)? Docker for Windows has got you covered...


Install Docker for Windows

Whilst Docker isn't the only option for managing containers, it is probably the most common; it can be installed from the official Docker website. I recommend going through the steps and setting it up to use the Windows Subsystem for Linux (WSL2). I imagine it will work fine using a Hyper-V image, but WSL2 will be quicker and it is the way that I configured my machine.


Once you've got Docker set up and working you'll be able to run some Docker commands.

To check everything is set up correctly, type

docker version

into a PowerShell window and you should see something like this:

Client:
 Cloud integration: 1.0.17
 Version:           20.10.7
 API version:       1.41
 Go version:        go1.16.4
 Git commit:        f0df350
 Built:             Wed Jun  2 12:00:56 2021
 OS/Arch:           windows/amd64
 Context:           desktop-linux
 Experimental:      true
Server: Docker Engine - Community
 Engine:
  Version:          20.10.7
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       b0f5bc3
  Built:            Wed Jun  2 11:54:58 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.6
  GitCommit:        d71fcd7d8303cbf684402823e425e9dd2e99285d
 runc:
  Version:          1.0.0-rc95
  GitCommit:        b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0


Now that Docker is set up, you will need to enable Kubernetes.


Click on the 'settings cog', then Kubernetes, then finally click 'Enable Kubernetes'


Click Save and Restart.

A message will appear stating that an internet connection is required and that it may take some time.

Soon you may notice a new Kubernetes icon at the bottom of the Docker window.


At this point we've got Docker and Kubernetes installed. To confirm this, take a look at the kubectl configuration:

kubectl config view

In the information returned you should see:

- context:
    cluster: docker-desktop
    user: docker-desktop
  name: docker-desktop


This is because when Kubernetes is installed it creates this context for you.

Now, to be sure we are using the correct context, type:

kubectl config use-context docker-desktop

It should respond with:

Switched to context "docker-desktop".

Now we can list the namespaces and the pods:

kubectl get namespace


Which shows:

NAME              STATUS   AGE
default           Active   27s
kube-node-lease   Active   28s
kube-public       Active   28s
kube-system       Active   29s

Then to see the pods:

kubectl get pods


No resources found in default namespace.


Okay, so it is empty and there is nothing running.  So let's install a dashboard!

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml

This will output a list of the resources being created: the kubernetes-dashboard namespace, its service account, service, secrets, config map, roles and role bindings, and the deployments for the dashboard and its metrics scraper.

Now we need to get a token before we can log into the dashboard (it is possible to enable a skip-login option, but for security we'll create a token). This is documented in the Kubernetes dashboard GitHub pages, but the process is:

Open up your favourite text editor and create two files:

ClusterRoleBinding.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

ServiceAccount.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

With those two files created, they need to be applied to the cluster. To do that, run (from where you saved the files):

kubectl apply -f .\ClusterRoleBinding.yaml
kubectl apply -f .\ServiceAccount.yaml

Now, to get the token that you need, run:

kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"

It will return a long string; this is the token:

eyJhbGciOiJSUzI1NiIsImtpZCI6IllPLTlwRmtaOUJwanhUczNtM0J0a2M5REl2eGlweGI0bzdQRzZJcG5VT3MifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmV....

Then finally, to log in to the dashboard we need to run:

kubectl proxy 

Then browse to the dashboard URL:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

Paste the token and click Sign In.

Changing the namespace (the dropdown box next to the Kubernetes logo) to kubernetes-dashboard will display the pods that are running the dashboard.


The final step that you may want to do is to add the Metrics Server; this will allow you to see memory and CPU usage for the pods.

To do this we need to install it:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

It will give the output:

serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created 

Before this will work we need to make a slight change. Out of the box the Metrics Server will only talk to the kubelet over verified HTTPS connections; as we are running this locally with self-signed certificates, we need to add the flag --kubelet-insecure-tls. For more information look at their GitHub page.


kubectl patch deployment metrics-server -n kube-system --type 'json' -p '[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-insecure-tls"}]'

Now, to view the graphs, log into the dashboard again:

kubectl proxy


Note: You may need to wait a few minutes for the CPU usage and Memory Usage graphs to appear and populate.
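
You can also query the Metrics Server from the command line; kubectl top reads from the same API, so once these commands return numbers the dashboard graphs should follow:

kubectl top nodes
kubectl top pods -n kubernetes-dashboard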


Now you've got Kubernetes all set up and working locally.

If for any reason you want to revert the system back to its starting state, click on the Docker icon, then the Settings cog, then Kubernetes (the same place where Kubernetes was enabled), and click the 'Reset Kubernetes Cluster' option. This will remove all the pods and namespaces and put you back at the beginning.



Enjoy!


Wednesday, 20 January 2021

Docker container time drift using WSL2

I recently came across an issue where my Ubuntu Docker containers were failing to restore packages, and this was due to them having a different time from my Windows 10 laptop.

After Googling, the suggested solution was to reboot my laptop, but as I'd just turned it on and got everything set up, this wasn't something that I wanted to do.

Most people suggested running a command in the container to re-synchronise the time with the host, but the command returned an error when I tried it.
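
If you want to see the drift for yourself, comparing the container clock with the host clock from PowerShell is a quick check (a sketch using any local Linux image):

# The container's idea of UTC time...
docker run --rm alpine date -u

# ...and the host's, for comparison
(Get-Date).ToUniversalTime()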

Eventually I found a GitHub issue which implied it was a bug in the Windows Subsystem for Linux.

Thankfully, re-synchronising the time was quite simple; just run this command from a PowerShell window:

wsl --shutdown

Docker Desktop will quickly inform you that it isn't running and suggest you start it.

Once it had started again, everything was back in sync and I could restore packages again!

Tuesday, 7 April 2020

NuGet Restore failing in Azure with Error parsing solution file


I recently came across a problem where builds were failing in Azure DevOps when performing a NuGet restore for the solution.

The error details were:

2020-04-07T08:05:03.8535680Z [command]C:\hostedtoolcache\windows\NuGet\4.1.0\x64\nuget.exe restore d:\a\1\s\MyProject\MyProject.sln -Verbosity Detailed -NonInteractive -ConfigFile d:\a\1\Nuget\tempNuGet_41515.config
2020-04-07T08:05:05.3883943Z NuGet Version: 4.1.0.2450
2020-04-07T08:05:05.3886378Z MSBuild auto-detection: using msbuild version '16.5.0.12403' from 'C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\MSBuild\Current\bin'. Use option -MSBuildVersion to force nuget to use a specific version of MSBuild.
2020-04-07T08:05:05.4539665Z System.AggregateException: One or more errors occurred. ---> NuGet.CommandLine.CommandLineException: Error parsing solution file at d:\a\1\s\MyProject\MyProject.sln: Exception has been thrown by the target of an invocation.
2020-04-07T08:05:05.4540531Z at NuGet.CommandLine.MsBuildUtility.GetAllProjectFileNamesWithMsBuild(String solutionFile, String msbuildPath)
2020-04-07T08:05:05.4541882Z at NuGet.CommandLine.RestoreCommand.ProcessSolutionFile(String solutionFileFullPath, PackageRestoreInputs restoreInputs)
2020-04-07T08:05:05.4542419Z at NuGet.CommandLine.RestoreCommand.d__37.MoveNext()
2020-04-07T08:05:05.4542827Z --- End of stack trace from previous location where exception was thrown ---
2020-04-07T08:05:05.4543213Z at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
2020-04-07T08:05:05.4543673Z at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
2020-04-07T08:05:05.4544134Z at NuGet.CommandLine.RestoreCommand.d__30.MoveNext()
2020-04-07T08:05:05.4544520Z --- End of inner exception stack trace ---
2020-04-07T08:05:05.4545738Z at System.Threading.Tasks.Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions)
2020-04-07T08:05:05.4546231Z at System.Threading.Tasks.Task.Wait(Int32 millisecondsTimeout, CancellationToken cancellationToken)
2020-04-07T08:05:05.4546606Z at NuGet.CommandLine.Command.Execute()
2020-04-07T08:05:05.4546965Z at NuGet.CommandLine.Program.MainCore(String workingDirectory, String[] args)


I then re-ran a build that had succeeded before (against the same commit) and it failed with the same error, pointing me in the direction of the Azure hosted agent being the issue.
I was then able to confirm that the Azure agent had been updated to version 20200331.1 (this can be found in the Initialize Job step of the build).
Checking the GitHub repo for the build agent confirmed that Visual Studio 2019 had been updated in that version of the agent.

After some research I realised that the version of NuGet.exe it was using was quite old and that NuGet should ideally match the version of Visual Studio (and more importantly MSBuild) you are using:
  • 4.1 of NuGet.exe matches Visual Studio 2017 Update 1 (15.1)
  • 4.7 of NuGet.exe matches Visual Studio 2017 Update 7 (15.7)
  • 5.0 of NuGet.exe matches Visual Studio 2019 (16.0)
  • 5.4 of NuGet.exe matches Visual Studio 2019 (16.4)
So in my case, running NuGet.exe version 4.1 to restore a Visual Studio 2019 project isn't a good idea.

To resolve the issue, add a new task to your build pipeline (NuGet Tool Installer) and set it to install a newer version of NuGet. For a YAML pipeline add:
- task: NuGetToolInstaller@1
  inputs:
    versionSpec: '5.x'


Or for the GUI (classic) pipeline, add the NuGet Tool Installer task and set its Version Spec to 5.x.
This will then ensure that you are using the correct version of NuGet, which should stop that error at least!

Hope that helps!

Friday, 13 September 2019

Using the Pi-Hole with Windows

If you haven't heard of the Pi-Hole, it is a great tool. It is a DNS server (actually it is more than just that) which can run on a Raspberry Pi and simply blocks adverts while you're browsing the web. While most ad-blockers are browser add-ons, this takes a different approach: it stops the adverts from being loaded before they ever reach the browser.
Effectively, you set up the Pi-Hole on a Raspberry Pi, then update the DNS settings on your router so that it uses the Pi-Hole; all connected devices then stop seeing adverts, because every time one attempts to load the Pi-Hole handles the request. It's great.

But what about your laptop? It's meant to be taken with you, so you'll see adverts when you're elsewhere.

Thankfully, Pi-Hole also offers Docker images, meaning all you require is Docker For Windows installed on your laptop.

So what do you need to do?
  1. Install Docker For Windows.  I'm not going to detail all of the steps, but the Pi-Hole image requires a Linux container (which is handy given the size of Windows containers).  Downloading Docker For Windows requires you to create an account (or log in) to Docker.
  2. Ensure that Windows containers are not the default, as to set this up we need to embrace Linux.
  3. Download the Pi-Hole image. To do this, open PowerShell and run:
    docker pull pihole/pihole
  4. This will take a couple of minutes (not long) to download the Linux container with the Pi-Hole installed.
  5. Create the following directories on your machine:
    • C:\pihole\
    • C:\pihole\pihole
    • C:\pihole\dnsmasq.d
    These are locations that the Pi-Hole image will use to store files that persist across containers (useful when upgrading the container to a newer version of Pi-Hole).
  6. Run the following command to start the Pi-Hole image:
    docker run -d --name pihole -p 53:53/tcp -p 53:53/udp -p 80:80 -p 443:443 -v "c:/pihole/pihole/:/etc/pihole/" -v "c:/pihole/dnsmasq.d/:/etc/dnsmasq.d/" -e WEBPASSWORD=vRz0n36IWF --restart=unless-stopped pihole/pihole:latest
  7. I strongly suggest that you use a strong password, as the web interface to the Pi-Hole requires it to log in.  You can now browse to localhost in a browser and you should see a page showing that the Pi-Hole is running, although no requests are going to it yet (so it won't actually be blocking any adverts).
  8. Docker may ask you for an account to share files on your C drive (or wherever you placed them).
  9. Finally, you need to update the DNS settings for your connection so that adverts are blocked.  To do this:
  10. In File Explorer right click on Network and select Properties
  11. Click on your connection
  12. Select Properties in the dialog
  13. Then select TCP/IPv4 and then properties
  14. Then set the DNS server to be 127.0.0.1 (as the Pi-Hole container is running on your laptop and listening on port 53).
  15. Click Ok to dismiss the dialog boxes and you're done.
  16. To see the Pi-Hole interface, type localhost into a browser.  Click on Login and enter the password (in my example vRz0n36IWF, but please change it!).
  17. Adverts are now being blocked!
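
To confirm the Pi-Hole is answering DNS queries, you can point a lookup directly at it from PowerShell; with the default blocking mode a blocked ad domain should come back as 0.0.0.0 (doubleclick.net is just an example of a domain that appears on common block lists):

# Ask the local Pi-Hole to resolve a well-known ad-serving domain
Resolve-DnsName -Name doubleclick.net -Server 127.0.0.1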