Art Debono Hotel, Γουβιά, Κέρκυρα 49100

Vocational school with modern teaching methods

I.E.K. Κέρκυρας

26610 90030

iekker@mintour.gr

Art Debono Hotel

Γουβιά, Κέρκυρα 49100

08:30 - 15:30

Monday - Friday

I.E.K. Κέρκυρας

26610 90030

info@iek-kerkyras.edu.gr

Art Debono Hotel

Γουβιά, Κέρκυρα 49100

08:30 - 19:00

Monday - Friday

Overview

  • Founded Date: November 3, 1941
  • Sectors: Tourism
  • Posted Jobs: 0
  • Viewed: 8

Company Description

Cerebras Becomes the World’s Fastest Host for DeepSeek R1, Outpacing Nvidia GPUs by 57x


Cerebras Systems announced today it will host DeepSeek’s breakthrough R1 artificial intelligence model on U.S. servers, promising speeds up to 57 times faster than GPU-based services while keeping sensitive data within American borders. The move comes amid growing concerns about China’s rapid AI advancement and data privacy.

The AI chip startup will deploy a 70-billion-parameter version of DeepSeek-R1 running on its proprietary wafer-scale hardware, delivering 1,600 tokens per second — a significant improvement over typical GPU implementations, which have struggled with newer “reasoning” AI models.
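The headline figures imply a rough GPU baseline, which a quick back-of-the-envelope check makes explicit (assuming, as the article suggests, that the 57x speedup is measured against the same 70B model served on GPUs):

```python
# Back-of-the-envelope check of the claimed figures.
# Assumption: the 57x speedup compares against the same
# 70B-parameter DeepSeek-R1 model served on GPU-based services.
cerebras_tps = 1600        # claimed tokens/second on wafer-scale hardware
claimed_speedup = 57       # claimed advantage over GPU-based services

implied_gpu_tps = cerebras_tps / claimed_speedup
print(f"Implied GPU baseline: {implied_gpu_tps:.1f} tokens/second")  # ~28.1
```

A baseline in the tens of tokens per second is consistent with what large models typically achieve in single-stream decoding on GPUs.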

Why DeepSeek’s reasoning models are reshaping enterprise AI

“These reasoning models affect the economy,” said James Wang, a senior executive at Cerebras, in an exclusive interview with VentureBeat. “Any knowledge worker basically has to do some kind of multi-step cognitive task. And these reasoning models will be the tools that enter their workflow.”

The announcement follows a tumultuous week in which DeepSeek’s emergence triggered Nvidia’s largest-ever market value loss, nearly $600 billion, raising questions about the chip giant’s AI dominance. Cerebras’ solution directly addresses two key concerns that have emerged: the computational demands of advanced AI models, and data sovereignty.

“If you use DeepSeek’s API, which is very popular right now, that data gets sent straight to China,” Wang explained. “That is one severe caveat that [makes] many U.S. companies and enterprises … not willing to consider [it].”

How Cerebras’ wafer-scale technology beats conventional GPUs at AI speed

Cerebras achieves its speed advantage through a novel chip architecture that keeps entire AI models on a single wafer-sized processor, eliminating the memory bottlenecks that plague GPU-based systems. The company claims its implementation of DeepSeek-R1 matches or exceeds the performance of OpenAI’s proprietary models, while running entirely on U.S. soil.
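The memory-bottleneck point can be made concrete. During autoregressive decoding, every generated token requires streaming the model’s weights from memory, so single-stream throughput is roughly bounded by memory bandwidth divided by model size. A sketch with illustrative numbers (FP16 weights assumed; 3.35 TB/s is the published HBM3 bandwidth of a single Nvidia H100 SXM, used here purely as an example, not a figure from the article):

```python
# Rough ceiling on single-stream decode throughput:
#   tokens/sec <= memory_bandwidth / bytes_read_per_token
# Illustrative numbers, not measurements.
params = 70e9                            # 70B-parameter model
bytes_per_param = 2                      # FP16 weights
model_bytes = params * bytes_per_param   # ~140 GB streamed per token

hbm_bandwidth = 3.35e12                  # ~3.35 TB/s (one H100 SXM, for illustration)
max_tps = hbm_bandwidth / model_bytes
print(f"Bandwidth-bound ceiling: ~{max_tps:.0f} tokens/second")
```

Batching and multi-GPU sharding raise the aggregate number, but the per-stream limit is why keeping the whole model in on-wafer memory, as Cerebras does, sidesteps the off-chip bandwidth wall.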

The development represents a significant shift in the AI landscape. DeepSeek, founded by former hedge fund executive Liang Wenfeng, stunned the industry by achieving advanced AI reasoning capabilities reportedly at just 1% of the cost of U.S. competitors. Cerebras’ hosting solution now offers American companies a way to leverage these advances while retaining data control.

“It’s really a nice story that the U.S. research labs gave this gift to the world. The Chinese took it and improved it, but it has limitations because it runs in China, has some censorship problems, and now we’re taking it back and running it on U.S. data centers, without censorship, without data retention,” Wang said.

U.S. faces new questions as AI development goes global

The service will be available through a developer preview starting today. While it will be initially free, Cerebras plans to implement API access controls due to strong early demand.

The move comes as U.S. lawmakers grapple with the implications of DeepSeek’s rise, which has exposed potential limits of American trade restrictions designed to maintain technological advantages over China. The ability of Chinese companies to achieve breakthrough AI capabilities despite chip export controls has prompted calls for new regulatory approaches.

Industry analysts suggest this development could accelerate the shift away from GPU-dependent AI infrastructure. “Nvidia is no longer the leader in inference performance,” Wang noted, pointing to benchmarks showing superior performance from various specialized AI chips. “These other AI chip companies are really faster than GPUs for running these latest models.”

The impact extends beyond technical metrics. As AI models increasingly incorporate sophisticated reasoning capabilities, their computational demands have soared. Cerebras argues its architecture is better suited for these emerging workloads, potentially reshaping the competitive landscape in enterprise AI deployment.
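The soaring demand is easy to quantify: reasoning models emit long chains of intermediate tokens before the final answer, so end-to-end latency scales as chain length divided by throughput. A toy comparison (the 5,000-token chain length is a hypothetical value chosen for illustration; the two throughput figures follow from the claims above):

```python
# End-to-end latency for a reasoning model is roughly:
#   latency = tokens_generated / throughput
reasoning_tokens = 5000  # hypothetical chain-of-thought length, for illustration

for label, tps in [("GPU-based service (~28 tok/s)", 28),
                   ("Cerebras claim (1600 tok/s)", 1600)]:
    print(f"{label}: {reasoning_tokens / tps:.1f} s")
```

At GPU speeds a single long reasoning response takes minutes; at the claimed wafer-scale speed it takes seconds — the difference between an offline batch job and an interactive tool in a knowledge worker’s workflow.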


© 2025 VentureBeat. All rights reserved.
