Tensormesh raises $4.5M to squeeze more inference out of AI server loads; also, Palantir enters $200M partnership with telco Lumen

Update: 2025-10-24

Description

Tensormesh uses an expanded form of KV caching to make inference workloads as much as ten times more efficient.
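The efficiency gain comes from not re-encoding prompt text the model has already processed. The toy sketch below illustrates generic prefix KV-cache reuse only, not Tensormesh's actual system; the names (`kv_store`, `compute_kv`) and the single-layer projections are hypothetical stand-ins for a transformer's per-layer key/value tensors.

```python
import hashlib
import numpy as np

D_MODEL = 64  # toy hidden size, illustrative only

# Toy stand-ins for one transformer layer's key/value projection weights.
W_K = np.random.randn(D_MODEL, D_MODEL)
W_V = np.random.randn(D_MODEL, D_MODEL)

def embed(tokens):
    """Deterministic toy embeddings so the same prompt yields the same KV."""
    rng = np.random.default_rng(abs(hash(tuple(tokens))) % (2**32))
    return rng.standard_normal((len(tokens), D_MODEL))

def compute_kv(tokens):
    """The expensive step a KV cache avoids repeating: project every prompt
    token into key/value tensors (one layer shown here)."""
    x = embed(tokens)
    return x @ W_K, x @ W_V

# prompt-prefix hash -> (K, V); a real serving system keys per model and layer
kv_store = {}

def prefix_key(tokens):
    return hashlib.sha256(" ".join(tokens).encode()).hexdigest()

def get_kv(tokens):
    """Reuse cached key/value tensors for an already-seen prompt prefix;
    compute and store them only on a cache miss."""
    key = prefix_key(tokens)
    if key in kv_store:
        return kv_store[key], True   # cache hit: skip recomputation
    kv = compute_kv(tokens)
    kv_store[key] = kv
    return kv, False

# Two requests sharing the same long prompt: the second one reuses the cache.
prompt = ["summarize", "this", "support", "ticket", ":", "..."]
_, hit_first = get_kv(prompt)
_, hit_second = get_kv(prompt)
print(hit_first, hit_second)  # False True
```

In this sketch the saving is just skipping one matrix multiply; in real serving, reusing cached key/value tensors across requests avoids re-running prefill over long shared prefixes, which is where the headline efficiency claim comes from.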


Plus, Palantir said on Thursday that it has struck a $200 million partnership with Lumen Technologies, under which the telecommunications company will use Palantir's AI software to build capabilities that support enterprise AI services.

Learn more about your ad choices. Visit podcastchoices.com/adchoices
