One of the latest AI/ML chips was revealed at last week’s Hot Chips 34 conference. Untether AI’s founder Martin Snelgrove and VP of Product Bob Beachler rolled out their company’s speedAI chip family, code-named Boqueria after a public market in Barcelona.
The speedAI240 chip is designed to serve as an add-on AI accelerator for a more conventional CPU and connects to the CPU via a PCIe Gen5 host interface. If 2 petaFLOPS of compute acceleration is not sufficient, or if one speedAI240 chip’s quarter gigabyte of on-chip memory cannot hold the neural network of interest, speedAI240 chips can be daisy-chained in a string or a mesh to tackle larger problems.
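As a rough back-of-the-envelope sketch (this is not Untether AI’s sizing method; the per-chip figure below simply rounds the article’s “quarter gigabyte,” and the model sizes are hypothetical), one can estimate how many daisy-chained chips a given model would need to keep all of its weights on-chip:

```python
import math

# Assumption: ~0.25 GB of usable on-chip memory per speedAI240 chip,
# per the article's "quarter gigabyte" figure.
CHIP_SRAM_BYTES = 256 * 1024 * 1024

def chips_needed(num_params: int, bytes_per_param: int = 1) -> int:
    """Minimum number of chips so all weights fit in on-chip memory.

    bytes_per_param=1 assumes 8-bit weights; use 2 for FP16, etc.
    """
    total_bytes = num_params * bytes_per_param
    return math.ceil(total_bytes / CHIP_SRAM_BYTES)

# Hypothetical model sizes, for illustration only:
print(chips_needed(60_000_000))     # ~60M-param vision model -> 1 chip
print(chips_needed(1_500_000_000))  # ~1.5B-param model       -> 6 chips
```

The same arithmetic scales in the other direction: quantizing weights more aggressively (fewer bytes per parameter) reduces the number of chips that must be chained.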
Singh’s early experience with AI was at Ford, where he spent four and a half years working on autonomous vehicles. While at Ford, he switched from engineering to business strategy, got an MBA from Harvard, and then joined Untether AI because the personal itch he’s trying to scratch is to take part in the commercial rollout of a cutting-edge technology. Today, that technological cutting edge certainly includes AI chips.
During my four-year stint in the AV industry, I was responsible for training deep neural networks for tasks such as object detection and tracking, as well as developing intellectual property and presenting at international industry summits. As an AI researcher and practitioner, I focused on keeping abreast of cutting-edge sensors, embeddings, models, and training techniques, without paying much attention to how my hardware worked in the back end.
Untether AI’s speedAI chip family exploits these characteristics with a new sort of architecture called “at-memory computing.” The first-order design goal of this architecture is to minimize data movement, which led to the development of an efficient, low-latency, and high-performance inference accelerator: the speedAI240 chip.