Blog

Azure Data Platform

Learnings, findings and fun of an Azure Data Platform Consultant

I presented at Power BI Fest on November 20th, 2021. It was an honor to be part of a conference with many great speakers.

For the resources used in the presentation and more details, please have a look at my previous blog post.

You can find the deck here: 201910SQLSaturdayDevOps.pdf.

I will be speaking at the Power BI Fest next weekend.

The topic will be how to analyze your Azure costs with Power BI. This offers many benefits over the native functionality of the Azure portal, for example speed and options to customize.
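As a starting point for such an analysis, the raw consumption data can be pulled with the Azure CLI. This is only a minimal sketch of one possible approach, not necessarily the method used in the talk; the date range and file name are placeholders, and an authenticated session via `az login` is assumed.

```shell
# Minimal sketch: export raw consumption data with the Azure CLI
# (requires `az login` first; dates and file name are placeholders).
az consumption usage list \
  --start-date 2021-11-01 \
  --end-date 2021-11-30 \
  --output json > azure-costs.json
```

The resulting JSON file can then be imported into Power BI as a data source.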

The result can look like the following:...

Continue Reading

I presented at The Data Blaster Community Conference on October 15th, 2021. It was an honor to be part of a conference with many great speakers.

This was based on my previous blog post on Chocolatey. Moreover, based on the feedback I received, I have added my investigation of winget to the...

Continue Reading

To work with an Azure data platform efficiently, there are many tools one should be aware of. However, installing them and keeping them current is a tedious job. In this post, I will describe which tools are needed for a developer of a standard data platform on Azure, with a focus on data warehousing. Moreover, I will explain how to install everything and keep it updated with Chocolatey.
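The basic idea can be sketched as follows, assuming the community Chocolatey feed and an elevated shell; the package list here is only an illustration, not the post's full tool list.

```shell
# Install a few common Azure data platform tools in one go
# (example package IDs from the community Chocolatey feed).
choco install azure-cli vscode azure-data-studio powershell-core -y

# Keeping everything current is then a single command:
choco upgrade all -y
```

The main benefit is that updating the whole toolchain becomes one scheduled command instead of a manual check per tool.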

Continue Reading

On Azure, most data services offer a firewall. Unfortunately, at the moment the details of those firewalls differ. As soon as the firewall is switched on for a storage service (e.g. Azure Data Lake Gen2, Azure Synapse, Azure Key Vault), Azure Data Factory can no longer access the resource by default and must be configured accordingly.

In this blog post, I want to demonstrate how to connect ADF and Synapse fairly securely without using the full Managed VNet runtime of ADF, which would incur extra cost.
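One common building block for this scenario is to let trusted Azure services such as Data Factory bypass the storage firewall while denying everything else. This is a sketch of that step only, not necessarily the post's exact recipe; the account and resource group names are placeholders.

```shell
# Sketch: deny public access by default, but let trusted Azure services
# (e.g. ADF) through the storage firewall. Names are placeholders.
az storage account update \
  --name mydatalake \
  --resource-group my-rg \
  --default-action Deny \
  --bypass AzureServices
```

On top of the firewall exception, ADF's managed identity still needs an appropriate RBAC role (e.g. Storage Blob Data Contributor) on the storage account to actually read and write data.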

Continue Reading

Yesterday, Microsoft announced and made available a new type of ingestion: the metadata-driven copy task. With this new task, one can create within minutes a pipeline that loads data from a wide variety of sources into many of the ADF-supported data stores.

Continue Reading