<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Matt's Blog</title><link>https://mateuszdrab.github.io/blog/</link><description>Recent content on Matt's Blog</description><generator>Hugo -- gohugo.io</generator><language>en</language><lastBuildDate>Sun, 02 Feb 2025 15:00:00 +0000</lastBuildDate><atom:link href="https://mateuszdrab.github.io/blog/index.xml" rel="self" type="application/rss+xml"/><item><title>GPU para-virtualization in Hyper-V</title><link>https://mateuszdrab.github.io/blog/article/gpu-paravirtualization/</link><pubDate>Sun, 02 Feb 2025 15:00:00 +0000</pubDate><guid>https://mateuszdrab.github.io/blog/article/gpu-paravirtualization/</guid><description>&lt;p>I have just received an NVIDIA Tesla P40, having sourced it for a decent price with the intent of installing it in my Hyper-V server.
It&amp;rsquo;s an older card, but it&amp;rsquo;s still a beast with 24GB of memory and 3840 CUDA cores.
The card is gaining popularity in the AI community due to its high memory capacity and relatively low price. It is also passively cooled and an official HPE optional product (according to the HPE DL380 G9 specs), so it is recognized by iLO and its temperature can be monitored as well.&lt;/p></description></item><item><title>Ingesting logs into Loki</title><link>https://mateuszdrab.github.io/blog/article/ingesting-logs-into-loki/</link><pubDate>Mon, 27 Jan 2025 14:50:00 +0000</pubDate><guid>https://mateuszdrab.github.io/blog/article/ingesting-logs-into-loki/</guid><description>&lt;p>Here is the standard Loki log processing flow that I use for my logs.&lt;/p>
&lt;p>The pipeline is comprised of the following stages:&lt;/p>
&lt;ul>
&lt;li>adding the &lt;code>job&lt;/code> label (so that I can query all logs ingested from files)&lt;/li>
&lt;li>adding the &lt;code>directory&lt;/code> label (derived from the directory portion of the &lt;code>filename&lt;/code> label)&lt;/li>
&lt;li>packing the &lt;code>filename&lt;/code> label into the log entry using the &lt;code>stage.pack&lt;/code> stage (reducing label cardinality; querying can still be done via the &lt;code>directory&lt;/code> label)&lt;/li>
&lt;li>adding &lt;code>hostname&lt;/code> and &lt;code>agent_hostname&lt;/code> labels to the logs (&lt;code>agent_hostname&lt;/code> refers to the machine running the agent, while &lt;code>hostname&lt;/code> is obtained from the logs themselves. This is not implemented in my Windows Agent configuration, but it is designed for situations where the agent might be handling logs from other sources, for example syslog or the event log)&lt;/li>
&lt;li>dropping the &lt;code>computer&lt;/code> label&lt;/li>
&lt;li>dropping logs older than 1 hour (this aligns with the server-side configuration to minimize errors)&lt;/li>
&lt;/ul>
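&lt;p>Sketched in Grafana Alloy configuration syntax, the pipeline above might look roughly like this. This is a minimal sketch, not my full configuration: the component names, the &lt;code>directory&lt;/code> regex, and the write target are assumptions for illustration.&lt;/p>
&lt;pre>&lt;code>loki.relabel "file" {
  forward_to = [loki.process.default.receiver]

  // add the job label so all file-sourced logs can be queried together
  rule {
    target_label = "job"
    replacement  = "file"
  }

  // derive the directory label from the path portion of the filename label
  rule {
    source_labels = ["filename"]
    regex         = "(.*)/[^/]+"
    target_label  = "directory"
  }
}

loki.process "default" {
  forward_to = [loki.write.default.receiver]

  // pack filename into the log line to keep label cardinality down
  stage.pack {
    labels = ["filename"]
  }

  // drop anything older than an hour, matching the server-side limit
  stage.drop {
    older_than = "1h"
  }
}&lt;/code>&lt;/pre>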
&lt;p>All file logs enter at the &lt;code>loki.relabel.file.receiver&lt;/code>, while Windows event logs enter at the &lt;code>loki.relabel.default.receiver&lt;/code>.&lt;/p></description></item><item><title>Ingesting SCCM logs into Loki</title><link>https://mateuszdrab.github.io/blog/article/sccm-loki-logs/</link><pubDate>Mon, 27 Jan 2025 14:20:00 +0000</pubDate><guid>https://mateuszdrab.github.io/blog/article/sccm-loki-logs/</guid><description>&lt;p>System Center Configuration Manager (SCCM) is still in use in my lab, mostly as a means of deploying applications and updates. Whilst I&amp;rsquo;m working on moving some of the functionality into Intune, SCCM will remain the update orchestrator for my server environment for the foreseeable future.&lt;/p>
&lt;p>SCCM logs seem to be standardized around two formats.
Here are examples of each:&lt;/p>
&lt;p>&lt;code>Service is up and running.~~ $$&amp;lt;SMS_REST_PROVIDER&amp;gt;&amp;lt;01-26-2025 17:35:20.122+00&amp;gt;&amp;lt;thread=13372 (0x343C)&amp;gt;&lt;/code>&lt;/p>
&lt;p>&lt;code>&amp;lt;![LOG[Worker M365ADeploymentPlanWorker was triggered by timer.]LOG]!&amp;gt;&amp;lt;time=&amp;quot;00:02:06.5649051&amp;quot; date=&amp;quot;12-20-2024&amp;quot; component=&amp;quot;SMS_SERVICE_CONNECTOR_M365ADeploymentPlanWorker&amp;quot; context=&amp;quot;&amp;quot; type=&amp;quot;1&amp;quot; thread=&amp;quot;126&amp;quot; file=&amp;quot;&amp;quot;&amp;gt;&lt;/code>&lt;/p></description></item><item><title>My LG OLED TV is crashing</title><link>https://mateuszdrab.github.io/blog/article/lg-oled-tv-is-crashing/</link><pubDate>Fri, 03 Mar 2023 13:31:27 +0000</pubDate><guid>https://mateuszdrab.github.io/blog/article/lg-oled-tv-is-crashing/</guid><description>&lt;p>I love my LG OLED TV; I believe the picture quality is unbeatable and I could call myself an LG TV advocate. However, this is not to say they&amp;rsquo;re flawless. I like WebOS, but it&amp;rsquo;s becoming clunky and slow - there are too many advertisements all over the place and the UI is not as responsive as it used to be - this is based on my experience of owning two C1 models, in 55 and 65 inch sizes. It sometimes frustrates me that starting Disney Plus can take almost a minute, if not longer, when initiated via the remote shortcut while the TV is off.&lt;/p></description></item><item><title>DNS service discovery for Prometheus</title><link>https://mateuszdrab.github.io/blog/article/dns2promsd/</link><pubDate>Tue, 28 Feb 2023 15:00:00 +0000</pubDate><guid>https://mateuszdrab.github.io/blog/article/dns2promsd/</guid><description>&lt;h2 id="background-story">Background story&lt;/h2>
&lt;p>Back when I ran SCOM, in addition to Windows machine monitoring and Event Log aggregation, it also performed ping tests against all the servers in my environment. This was a very useful feature, as it allowed me to quickly identify servers that were down or unreachable. The main quirk was that it was only aware of the servers I had manually added and could not discover things automatically, as I didn&amp;rsquo;t have the right setup to leverage SNMP-based discovery.&lt;/p></description></item><item><title>Veeam Exporter for Prometheus</title><link>https://mateuszdrab.github.io/blog/article/veeam-exporter/</link><pubDate>Mon, 27 Feb 2023 09:54:11 +0000</pubDate><guid>https://mateuszdrab.github.io/blog/article/veeam-exporter/</guid><description>&lt;p>&lt;img src="https://github.com/peekjef72/veeam_exporter/raw/master/screenshots/veeam_general_dash.png" alt="Veeam Exporter dashboard in Grafana" title="Veeam Exporter dashboard in Grafana">&lt;/p>
&lt;p>Over the past couple of months I&amp;rsquo;ve put a considerable amount of time into deploying a monitoring infrastructure in my home lab to replace Splunk and SCOM. In a way, this setup introduced a new level of monitoring that I did not have before: I&amp;rsquo;ve fallen deeply for metrics and the power of Prometheus, and pretty much sunk into Grafana&amp;rsquo;s LGTM ecosystem, quickly implementing Tempo and Loki for the full experience.&lt;/p></description></item><item><title>Hello and welcome</title><link>https://mateuszdrab.github.io/blog/article/first-post/</link><pubDate>Mon, 27 Feb 2023 09:00:00 +0000</pubDate><guid>https://mateuszdrab.github.io/blog/article/first-post/</guid><description>&lt;p>Welcome to the first post on my new blog. I will be writing about all things related to IT, mainly focusing on VMware and Microsoft technologies. I will also be writing about my personal projects and other things that I find interesting.&lt;/p></description></item></channel></rss>