As enterprises rapidly adopt AI to improve efficiency, customer experience, and innovation, the choice of model architecture has become a critical factor. Whether deploying a massive Large Language Model (LLM), an efficient Very Large Language Model (VLLM), or a compute-friendly Small Language Model (SLM), organisations are increasingly strategic about balancing performance, cost, and accuracy. […]

Read More

Not long ago, I wrote about why Retrieval-Augmented Generation (RAG) is such a pivotal architecture in modern AI workflows, particularly when compared to fine-tuning and training from scratch. The core argument was simple: RAG enables models to stay up-to-date, grounded, and efficient without massive retraining costs. It was (and still is) a pragmatic solution to […]

Read More
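
The retrieval-then-generation flow argued for above can be shown in a few lines. This is a minimal sketch only: the corpus, the keyword-overlap scorer, and the prompt template are toy assumptions standing in for a real vector store and an actual LLM call.

```python
# Minimal RAG sketch: retrieve the most relevant documents, then ground
# the model's prompt in them. Keyword overlap stands in for a real
# embedding-based retriever (an assumption for illustration).

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Assemble a grounded prompt: retrieved context plus the question."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical three-document corpus for the example.
corpus = [
    "vSphere 8 adds device groups for GPU passthrough.",
    "RAG pairs a retriever with a generator model.",
    "Fine-tuning updates model weights on task data.",
]
prompt = build_prompt("How does RAG work with a retriever?", corpus)
```

Because the knowledge lives in the corpus rather than the weights, updating what the model "knows" is a document swap, not a retraining run, which is the cost argument the post makes against fine-tuning.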

Artificial Intelligence (AI) is transforming industries, but deploying AI workloads efficiently remains a challenge. Many organisations look to virtualisation to maximise resource utilisation, improve security, and streamline AI infrastructure management. This blog explores how to deploy AI workloads in virtualised environments using VMware vSphere Foundation (VVF), Private AI on VMware Cloud Foundation (VCF), […]

Read More

Artificial Intelligence (AI) is undergoing a rapid transformation, driven by advancements in hardware and software. Today, AI relies heavily on high-performance computing (HPC), GPUs, TPUs, ASICs, and optimised software frameworks. However, as AI models grow more complex, the limits of current technology become apparent. This raises an important question: will the AI infrastructure we rely […]

Read More