Categories
Personal Software Technical

Code Less, and Communicate More

AI is no longer just a tool for specialists; it’s becoming a co-pilot for every engineer and innovator today. Tools like GitHub Copilot and Claude, among others, are churning out code faster than ever, handling boilerplate tasks and even suggesting optimizations. But here’s an insight many people are sleeping on: as AI democratizes software development, […]

Categories
How-To Software Technical

Ollama Serve: Your Guide to Personal LLM Access

Introduction: Unlock the Power of Ollama Serve In an era where artificial intelligence is being democratized, Ollama Serve emerges as a revolutionary platform offering access to large language models (LLMs) locally. This guide walks you through every step—from installation to utilization—ensuring you harness its full potential. What is Ollama Serve? Ollama Serve is more than just […]
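The full guide covers installation through utilization; as a minimal sketch of getting started (assuming the official install script at ollama.com, the default port 11434, and `llama3` as an example model name):

```shell
# Install Ollama (official install script; Linux/macOS)
curl -fsSL https://ollama.com/install.sh | sh

# Start the local server (listens on http://localhost:11434 by default)
ollama serve

# In another terminal: pull a model and chat with it
ollama pull llama3
ollama run llama3 "Summarize what Ollama Serve does."
```

From there, anything that speaks the Ollama REST API can point at `localhost:11434`.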

Categories
Software Technical

Free Perplexity Pro

I know you’ve been interested in upgrading your AI experience, whether self-hosted or with one of the many paid subscriptions. Well, pay attention, friends (Geek Hat’s on). Now is your chance to use the Perplexity Pro service free for one year: AI chat, search tool, and more. All you need is your Xfinity […]

Categories
Software Technical

Moving from Bitbucket to GitHub for your CI/CD

I am sharing this only because I am thinking it through, documenting it, and trying to understand what an organization might consider when planning a move from Bitbucket to GitHub, from a Jenkins automation point of view. Here are some key aspects to be aware of. API Differences: Workflow Differences: Additional Considerations: Resources to help you: Remember, […]
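One concrete step behind the planning above — moving the repository history itself — can be sketched like this (the repository URLs are placeholders, not real projects):

```shell
# Mirror-clone the Bitbucket repo (all branches, tags, and refs)
git clone --mirror https://bitbucket.org/myteam/myrepo.git
cd myrepo.git

# Point the mirror at the new GitHub remote (placeholder URL)
git remote set-url origin https://github.com/myteam/myrepo.git

# Push everything — branches, tags, notes — in one shot
git push --mirror
```

The Jenkins side (webhooks, credentials, multibranch pipeline sources) still has to be repointed separately, which is where the API and workflow differences come in.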

Categories
How-To Software Technical

Large Language Models (LLM) Running @ Home

Running large language models (LLMs) on home hardware can be a challenging task due to the significant computational resources required by these models. However, with the right setup and configuration, it is possible to train and run LLMs on your personal computer or laptop. The first step in running an LLM on your home hardware […]
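As a hedged illustration of what "the right setup" can look like in practice, a quantized model served locally keeps memory requirements within reach of consumer hardware (the model tag below is an example; the GPU check assumes an NVIDIA card):

```shell
# Check how much GPU memory is available (NVIDIA example)
nvidia-smi --query-gpu=memory.total,memory.free --format=csv

# Pull a quantized model sized for consumer hardware (example tag)
ollama pull llama3:8b

# Run it locally
ollama run llama3:8b "Hello from my home lab"
```

Smaller quantized variants trade some output quality for dramatically lower VRAM and RAM needs, which is usually the deciding factor at home.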

Categories
How-To Software Technical

Ways to run ollama as a service

There are several ways you can run Ollama as a service, but one of the most popular options is using Google Cloud Run. This platform allows you to deploy and run containerized applications on-demand without managing infrastructure. You can use Docker containers to package and deploy your Ollama model, and then use Cloud Run to […]
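The Cloud Run route described above might look roughly like this — a sketch, not a production recipe; the project and service names are placeholders, and Cloud Run expects the container to listen on the configured port:

```shell
# Build and push the container image (placeholder project/image names)
gcloud builds submit --tag gcr.io/my-project/ollama-serve

# Deploy to Cloud Run with enough memory for a small model
gcloud run deploy ollama-serve \
  --image gcr.io/my-project/ollama-serve \
  --memory 8Gi --cpu 4 \
  --port 11434 \
  --allow-unauthenticated
```

Cold starts and per-request billing make this attractive for intermittent use, but model download time on startup is worth baking into the image.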