For the past few years, the world has been hit by a storm of AI-generated information, most of it produced by generative pre-trained transformer (GPT) models performing the AI inferencing. These large language models (LLMs) are excellent at answering requests, but they have one drawback: the response time, or lag, is noticeable. This is largely down to the hardware they run on, namely GPUs.
Many of the GPUs running AI models in professional data centres are Nvidia’s A100 (or similar) series, which contain thousands of CUDA cores, far more than the handful of processing cores in a standard CPU. These CUDA cores work together to answer the language requests directed at them: they are designed for parallel processing and are optimised for tasks like scientific simulations. But are they really optimised for this job?
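To make the parallelism point concrete, here is a toy Python sketch (the matrix size and timings are illustrative assumptions, not an Nvidia or Groq benchmark) contrasting a sequential, element-by-element computation with the same work expressed as a single matrix multiplication, the kind of operation that thousands of cores can share:

```python
import time
import numpy as np

# LLM inference is dominated by large matrix multiplications.
# Compare a sequential Python loop with a vectorised matmul that a
# parallel backend (BLAS on CPU, CUDA cores on GPU) can split across
# many execution units at once.
n = 512
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c_loop = np.zeros((n, n))
for i in range(n):          # one output row at a time
    for j in range(n):      # one output element at a time
        c_loop[i, j] = np.dot(a[i, :], b[:, j])
loop_time = time.perf_counter() - start

start = time.perf_counter()
c_mat = a @ b               # the whole product in one parallel-friendly call
mat_time = time.perf_counter() - start

assert np.allclose(c_loop, c_mat)
print(f"element-wise loop: {loop_time:.3f}s, single matmul: {mat_time:.3f}s")
```

On typical hardware the single matrix multiplication wins by orders of magnitude, and that gap is exactly what parallel accelerators, whether GPUs or ASICs, are built to exploit.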
Groq seems to be the new kid in all this AI hoopla, but the company has been around since 2016, when it was founded by a group of former Google employees led by Jonathan Ross, one of the designers of the Tensor Processing Unit (TPU), and Douglas Wightman, an engineer at Google X. The TPU is an AI accelerator built as an application-specific integrated circuit (ASIC), a custom-designed chip tailored for a specific task. ASICs offer optimised performance and efficiency compared to general-purpose processors.
And this is where the story gets exciting. Groq runs AI models on ASICs rather than on GPU architecture, yet delivers responses comparable to the current slew of GPT models in use. Groq’s architecture is designed to accelerate machine learning workloads, and the payoff is twofold: it needs much less energy to answer the same requests and, more importantly, does so with seemingly no lag. That last property comes down to the speed at which ASICs perform their ‘application-specific’ tasks.
My own real-world testing bears this out. Asking exactly the same technical question of both ChatGPT’s 3.5 and 4.0 models and of Groq, then comparing the response times, I can say without a doubt that Groq has minimal lag compared with the GPT models. The information in the responses is presented in different formats, but the answers compare favourably with one another. Groq’s response is almost immediate, whereas the other models take a few seconds before they begin to display an answer.
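Readers who want to reproduce this kind of comparison can time the first streamed token themselves. The sketch below is a minimal Python example, assuming OpenAI-compatible chat completion endpoints; the URLs, model names and environment variables are placeholders to substitute for whichever services you test:

```python
import os
import time
import requests

def time_to_first_token(base_url: str, api_key: str, model: str, prompt: str) -> float:
    """Send one streamed chat request; return seconds until the first chunk arrives."""
    start = time.perf_counter()
    resp = requests.post(
        f"{base_url}/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "stream": True,  # measure lag to the first token, not the full answer
        },
        stream=True,
        timeout=60,
    )
    resp.raise_for_status()
    for line in resp.iter_lines():
        if line:  # skip the blank keep-alive lines in the event stream
            return time.perf_counter() - start
    raise RuntimeError("stream ended before any data arrived")

prompt = "Explain the difference between an ASIC and a GPU."
# Endpoints and model names are placeholders; substitute the values
# for the services you actually test.
services = [
    ("Groq", "https://api.groq.com/openai/v1", "GROQ_API_KEY", "llama3-8b-8192"),
    ("OpenAI", "https://api.openai.com/v1", "OPENAI_API_KEY", "gpt-3.5-turbo"),
]
for name, url, key_env, model in services:
    lag = time_to_first_token(url, os.environ[key_env], model, prompt)
    print(f"{name}: first token after {lag:.2f} s")
```

Timing the first streamed chunk rather than the complete answer isolates the lag a user actually perceives before text starts appearing.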
The introduction of Groq’s ASIC-based approach to AI inferencing marks a significant shift in the LLM landscape. By prioritising speed and efficiency, Groq is challenging the current dominance of GPU-driven AI, offering near-instantaneous responses while consuming less power. As AI applications continue to expand, this technology could redefine the way we interact with AI systems, setting a new benchmark for responsiveness.
Whether this signals a broader industry shift remains to be seen, but one thing is clear – Groq has introduced a compelling alternative that demands attention.