Dihuni ships optiReady GPU servers for generative AI, LLM applications
Dihuni announced that it has started shipping optiReady GPU (graphics processing unit) servers and workstations designed for generative AI (artificial intelligence) and LLM (large language model) applications. These pre-configured systems aim to simplify generative AI infrastructure selection and to accelerate deployment from procurement to running applications.
Read more

Streaming for ‘extremely fast’ event processing in IoT, edge and cloud environments simplified by Hazelcast Jet
In-memory computing platform company Hazelcast has unveiled Hazelcast Jet, which it says is the only streaming engine with no external system dependencies. According to the company, the result is the industry’s fastest stream-processing engine, dramatically simplifying implementation from the smallest to the largest deployments.
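Stream-processing engines such as Jet typically group an unbounded event stream into fixed windows before aggregating. The sketch below illustrates the tumbling-window idea in plain Python; it is a generic illustration of the concept, not Hazelcast's actual API, and the function name and event format are invented for this example.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_size_ms):
    """Group (timestamp_ms, key) events into fixed-size tumbling windows
    and count occurrences of each key per window."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # Each event falls into exactly one non-overlapping window.
        window_start = (ts // window_size_ms) * window_size_ms
        windows[window_start][key] += 1
    return {start: dict(counts) for start, counts in sorted(windows.items())}

# Hypothetical sensor readings spanning two one-second windows.
events = [(100, "sensor-a"), (900, "sensor-b"), (1200, "sensor-a"), (1800, "sensor-a")]
print(tumbling_window_counts(events, 1000))
# {0: {'sensor-a': 1, 'sensor-b': 1}, 1000: {'sensor-a': 2}}
```

A production engine like Jet performs the same kind of windowed aggregation continuously and in a distributed fashion, rather than over a finished list as here.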
Read more

Renesas Electronics and Codeplay collaborate on OpenCL™ and SYCL™ for ADAS solutions
Renesas Electronics, a supplier of advanced semiconductor solutions, and Codeplay Software Ltd., experts in high-performance compilers and software optimisation for multi-core processing, announced a collaboration to deliver ComputeAorta™, Codeplay’s OpenCL open standard-based software framework, for Renesas R-Car system-on-chips (SoCs).
Read more

New Quick Start Solution from MapR accelerates deep learning application deployments
MapR Technologies, provider of the Converged Data Platform for intelligent applications that fully integrate analytics with operational processes in real time, announced a new Quick Start Solution (QSS) for deep learning applications at Strata London.
Read more