I’m diving headfirst into a new project: launching a podcast! With all the interesting conversations and random thoughts swirling around in my head, it just felt like the perfect way to connect and share them with the world. Buckle up, and let’s see where this journey takes us!
In 2020, my 2017 Hyundai Sonata Sport Edition started using an excessive amount of oil. It took a frustrating year of tests and investigation at the dealership to diagnose the problem as a known oil consumption issue with this engine. Thankfully, the warranty covered the replacement of the transmission and lower engine block. Fast forward […]
I am sharing this only because I am thinking about it, documenting it, and trying to understand what an organization might consider when planning a move from Bitbucket to GitHub from a Jenkins automation point of view. Here are some key aspects to be aware of. API Differences: Workflow Differences: Additional Considerations: Resources to help you: Remember, […]
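One place the API differences show up immediately is in webhook handling: a Jenkins job that parses push events has to deal with different payload shapes. A minimal sketch, assuming payload structures based on the public GitHub and Bitbucket Cloud webhook documentation (verify against your own instances before relying on this):

```python
# Hypothetical helpers: the payload shapes below are assumptions drawn from
# public GitHub / Bitbucket Cloud webhook docs, not from this post.

def branch_from_github_push(payload: dict) -> str:
    # GitHub push events carry the branch in "ref", e.g. "refs/heads/main".
    return payload["ref"].removeprefix("refs/heads/")

def branch_from_bitbucket_push(payload: dict) -> str:
    # Bitbucket Cloud push events nest it under push.changes[].new.name.
    return payload["push"]["changes"][0]["new"]["name"]

github_payload = {"ref": "refs/heads/main"}
bitbucket_payload = {"push": {"changes": [{"new": {"name": "main"}}]}}

print(branch_from_github_push(github_payload))        # main
print(branch_from_bitbucket_push(bitbucket_payload))  # main
```

Differences like this are why a lift-and-shift of Jenkins webhook logic rarely works without a translation layer.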
Perhaps not the best way to handle this, but as a quick-and-dirty fix it might work for you. Edit /etc/docker/daemon.json, then restart the Docker daemon for the changes to take effect. Now when you start a container, Docker will populate /etc/resolv.conf with the values from daemon.json. Why did I do this? I wanted my […]
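A minimal sketch of the daemon.json change — the resolver addresses here are placeholder examples, not the values from the original post; substitute your own DNS servers:

```json
{
  "dns": ["8.8.8.8", "1.1.1.1"]
}
```

Then restart the daemon (on a systemd host, `sudo systemctl restart docker`) and new containers should pick up those resolvers in /etc/resolv.conf.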

Running large language models (LLMs) on home hardware can be challenging because of the significant computational resources these models require. However, with the right setup and configuration, it is possible to run — and even fine-tune smaller — models on a personal computer or laptop. The first step in running an LLM on your home hardware […]
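A useful first sanity check is a back-of-envelope memory estimate. This sketch computes the weight-only footprint of a 7B-parameter model at common quantization levels (an assumption-laden simplification: it ignores KV cache, activations, and runtime overhead):

```python
def model_memory_gb(params_billion: float, bits_per_weight: float) -> float:
    """Rough weight-only memory footprint in GiB.

    Ignores KV cache and runtime overhead, so treat it as a lower bound.
    """
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1024**3

for bits, label in [(16, "fp16"), (8, "int8"), (4, "4-bit")]:
    print(f"7B @ {label}: ~{model_memory_gb(7, bits):.1f} GiB")
# 7B @ fp16: ~13.0 GiB
# 7B @ int8: ~6.5 GiB
# 7B @ 4-bit: ~3.3 GiB
```

This is why quantization is usually the difference between a model fitting on consumer hardware or not.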
There are several ways you can run Ollama as a service, but one of the most popular options is using Google Cloud Run. This platform allows you to deploy and run containerized applications on-demand without managing infrastructure. You can use Docker containers to package and deploy your Ollama model, and then use Cloud Run to […]
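A deployment sketch, untested and built on assumptions: it uses the public `ollama/ollama` Docker image and Ollama's documented default port 11434, with memory/CPU values that are placeholders you would size to your model:

```shell
# Deployment sketch (assumptions: ollama/ollama image, port 11434;
# memory, CPU, and region are illustrative placeholders).
gcloud run deploy ollama-service \
  --image ollama/ollama \
  --port 11434 \
  --memory 8Gi --cpu 4 \
  --region us-central1 \
  --no-allow-unauthenticated
```

Keeping the service unauthenticated off (`--no-allow-unauthenticated`) matters here, since an open Ollama endpoint would let anyone run inference on your bill.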