
New DeepSeek model drastically reduces resource usage by converting text and documents into images — ‘vision-text compression’ uses up to 20 times fewer tokens


The Chinese developers of DeepSeek AI have released a new model that leverages multimodal capabilities to handle complex documents and large blocks of text more efficiently by first converting them into images, as per SCMP. Large quantities of text are rendered as images and passed through vision encoders, so that when the content is accessed later it requires between seven and 20 times fewer tokens, while maintaining an impressive level of accuracy.
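The reported savings come down to simple token arithmetic: a page of text rendered as an image can be represented by a small, fixed budget of vision tokens instead of roughly one token for every few characters. The sketch below illustrates that bookkeeping; the characters-per-token heuristic, the per-page character count, and the per-page vision-token budget are assumptions chosen for illustration, not DeepSeek's published figures, and only the seven-to-20-times range comes from the report.

```python
# Back-of-the-envelope sketch of vision-text compression token arithmetic.
# All three constants below are illustrative assumptions, not DeepSeek's
# published figures.
import math

CHARS_PER_TEXT_TOKEN = 4       # rough heuristic for English text tokenization
CHARS_PER_PAGE = 4000          # assumed characters on one densely rendered page
VISION_TOKENS_PER_PAGE = 100   # assumed fixed token budget per rendered page


def compression_ratio(num_chars: int) -> float:
    """Compare the token cost of raw text vs. the same text rendered as page images."""
    text_tokens = num_chars / CHARS_PER_TEXT_TOKEN
    pages = math.ceil(num_chars / CHARS_PER_PAGE)
    vision_tokens = pages * VISION_TOKENS_PER_PAGE
    return text_tokens / vision_tokens


if __name__ == "__main__":
    for chars in (4_000, 40_000, 400_000):
        print(f"{chars:>7} chars -> roughly {compression_ratio(chars):.0f}x fewer tokens as images")
```

With these assumed numbers the ratio lands around 10x, inside the range the developers report; the actual savings depend on how densely the text is rendered and how aggressively the encoder compresses each page.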

DeepSeek is the Chinese-developed AI that
