Ideezy

Overview

  • Founded May 20, 1913
  • Sectors Audit / Consulting / Legal
  • Jobs posted 0
  • Views 11

Company description

Cerebras becomes the World’s Fastest Host for DeepSeek R1, Outpacing Nvidia GPUs By 57x


Cerebras Systems announced today that it will host DeepSeek's breakthrough R1 artificial intelligence model on U.S. servers, promising speeds up to 57 times faster than GPU-based solutions while keeping sensitive data within American borders. The move comes amid growing concerns about China's rapid AI advancement and data privacy.

The AI chip startup will deploy a 70-billion-parameter version of DeepSeek-R1 running on its proprietary wafer-scale hardware, delivering 1,600 tokens per second – a dramatic improvement over conventional GPU implementations that have struggled with newer "reasoning" AI models.
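The claimed figures are easy to sanity-check with back-of-envelope arithmetic: at 1,600 tokens per second, a long reasoning-style answer finishes in well under a second, while a baseline 57 times slower takes over half a minute. A minimal sketch using only the numbers stated in the article:

```python
# Latency implied by the article's figures: 1,600 tokens/s on Cerebras,
# with GPU-based solutions cited as 57x slower.
CEREBRAS_TOKENS_PER_SEC = 1600
SPEEDUP = 57
gpu_tokens_per_sec = CEREBRAS_TOKENS_PER_SEC / SPEEDUP  # roughly 28 tokens/s

response_tokens = 1000  # a typical long "reasoning" answer (assumption)
cerebras_latency = response_tokens / CEREBRAS_TOKENS_PER_SEC
gpu_latency = response_tokens / gpu_tokens_per_sec

print(f"Cerebras: {cerebras_latency:.2f}s, GPU baseline: {gpu_latency:.1f}s")
```

For interactive use, that is the difference between an answer that feels instant and one the user waits half a minute for, which is why throughput matters more for reasoning models that emit long chains of intermediate tokens.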

Why DeepSeek’s reasoning models are reshaping business AI

"These reasoning models impact the economy," said James Wang, a senior executive at Cerebras, in an exclusive interview with VentureBeat. "Any knowledge worker basically has to do some kind of multi-step cognitive tasks. And these reasoning models will be the tools that enter their workflow."

The announcement follows a tumultuous week in which DeepSeek's emergence triggered Nvidia's largest-ever market value loss, nearly $600 billion, raising questions about the chip giant's AI supremacy. Cerebras' solution directly addresses two key concerns that have emerged: the computational demands of advanced AI models, and data sovereignty.

"If you use DeepSeek's API, which is incredibly popular right now, that data gets sent straight to China," Wang explained. "That is one severe caveat that [makes] many U.S. companies and enterprises … not willing to consider [it]."

How Cerebras’ wafer-scale technology beats conventional GPUs at AI speed

Cerebras achieves its speed advantage through a novel chip architecture that keeps entire AI models on a single wafer-sized processor, eliminating the memory bottlenecks that plague GPU-based systems. The company claims its implementation of DeepSeek-R1 matches or exceeds the performance of OpenAI's proprietary models, while running entirely on U.S. soil.
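The memory-bottleneck argument can be made concrete with a standard back-of-envelope model: during autoregressive decoding, generating each token requires streaming essentially all of the model's weights from memory, so per-stream decode speed is bounded by memory bandwidth divided by model size. A sketch under that assumption (the bandwidth figures below are illustrative assumptions, not vendor specifications):

```python
def max_decode_tokens_per_sec(params_billions: float,
                              bytes_per_param: float,
                              bandwidth_gb_s: float) -> float:
    """Rough upper bound on single-stream decode speed when every generated
    token must stream all model weights from memory once."""
    model_size_gb = params_billions * bytes_per_param
    return bandwidth_gb_s / model_size_gb

# A 70B-parameter model at 16-bit precision is ~140 GB of weights.
# Off-chip HBM vs. on-wafer SRAM-class bandwidth (illustrative numbers):
hbm_bound = max_decode_tokens_per_sec(70, 2, 3_350)
sram_bound = max_decode_tokens_per_sec(70, 2, 220_000)
print(f"HBM-bound: ~{hbm_bound:.0f} tokens/s, on-wafer: ~{sram_bound:.0f} tokens/s")
```

Under these illustrative numbers the off-chip-memory bound lands in the tens of tokens per second, while on-chip bandwidth clears a thousand, which is the shape of the gap the article describes; real systems mitigate this with batching, quantization, and speculative decoding.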

The development represents a significant shift in the AI landscape. DeepSeek, founded by former hedge fund executive Liang Wenfeng, stunned the industry by achieving advanced AI reasoning capabilities at just 1% of the cost of U.S. rivals. Cerebras' hosting solution now offers American companies a way to leverage these advances while maintaining data control.

"It's actually a nice story that the U.S. research labs gave this gift to the world. The Chinese took it and improved it, but it has limitations because it runs in China, has some censorship issues, and now we're taking it back and running it on U.S. data centers, without censorship, without data retention," Wang said.

U.S. tech leadership faces new questions as AI development goes global

The service will be available through a developer preview starting today. While it will initially be free, Cerebras plans to implement API access controls due to strong early demand.
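The article gives no endpoint details, but hosted model previews of this kind are typically exposed through an OpenAI-compatible chat-completions API. A purely illustrative sketch of what a request body might look like; the URL and model name below are assumptions, not documented values:

```python
import json

# Hypothetical endpoint and model identifier, shown only for illustration.
API_URL = "https://api.example.com/v1/chat/completions"

payload = {
    "model": "deepseek-r1-distill-llama-70b",  # assumed name, not confirmed
    "messages": [
        {"role": "user", "content": "Explain wafer-scale inference briefly."}
    ],
    "max_tokens": 512,
}
body = json.dumps(payload)
# An actual call would POST `body` with an Authorization header once
# access is granted through the developer preview.
print(body[:60])
```

The point of the OpenAI-compatible shape is that existing client code can switch hosts by changing only the base URL and model name.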

The move comes as U.S. lawmakers grapple with the implications of DeepSeek's rise, which has exposed potential limitations in American trade restrictions designed to maintain technological advantages over China. The ability of Chinese companies to achieve breakthrough AI capabilities despite chip export controls has prompted calls for new regulatory approaches.

Industry experts suggest this development could accelerate the shift away from GPU-dependent AI infrastructure. "Nvidia is no longer the leader in inference performance," Wang noted, pointing to benchmarks showing superior performance from various specialized AI chips. "These other AI chip companies are genuinely faster than GPUs for running these latest models."

The impact extends beyond technical metrics. As AI models increasingly incorporate sophisticated reasoning capabilities, their computational demands have escalated. Cerebras argues its architecture is better suited for these emerging workloads, potentially reshaping the competitive landscape in enterprise AI deployment.



© 2025 VentureBeat. All rights reserved.
