Mark Zuckerberg, the CEO of Facebook parent Meta, recently announced that he wants to build an open-source artificial general intelligence for everyone. He added that he will undertake the project responsibly, and for that he is ready to spend billions. But how many billions?
Zuckerberg said in an Instagram Reels post earlier this week that by the end of 2024, Meta's "massive" computing infrastructure will include 350,000 H100 graphics cards made by Nvidia.
While he did not share how many GPUs the company purchased to develop its already launched AI models, the number of graphics cards he says Meta will need going forward gives a rough sense of how much money it will have to spend on them.
As per a report by CNBC, Nvidia is selling the H100 for $25,000 to $30,000, and on eBay the cards can cost over $40,000. Assuming Meta pays at the low end of that range, 350,000 cards would come to nearly $9 billion.
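For readers who want to check the figure, here is a quick back-of-the-envelope sketch using the CNBC-reported price range; the per-unit prices are assumptions, since Meta's actual bulk pricing has not been disclosed:

```python
# Rough estimate of Meta's H100 spend, based on reported (not confirmed) prices.
h100_count = 350_000                      # GPUs Zuckerberg says Meta will have by end of 2024
price_low, price_high = 25_000, 30_000    # CNBC-reported per-unit price range in USD

low_estimate = h100_count * price_low     # ~$8.75 billion
high_estimate = h100_count * price_high   # ~$10.5 billion

print(f"Low end:  ${low_estimate / 1e9:.2f} billion")
print(f"High end: ${high_estimate / 1e9:.2f} billion")
```

At the low end the total works out to about $8.75 billion, which is where the "nearly $9 billion" figure comes from; at $30,000 per card it would be closer to $10.5 billion.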
Meta's chief AI scientist Yann LeCun last month highlighted the importance of GPUs in developing AGI. "[If] you think AGI is in, the more GPUs you have to buy," LeCun said at the time.
Zuckerberg also noted that Meta's compute infrastructure will contain "almost 600k H100 equivalents of compute if you include other GPUs." In December, tech companies including Meta, OpenAI and Microsoft said they would use AMD's new Instinct MI300X AI chips.
Meta's future plans
In its third-quarter earnings report, Meta said that total expenses for 2024 would be in the range of $94 billion to $99 billion. "In terms of investment priorities, AI will be our biggest investment area in 2024, both in engineering and computer resources," Zuckerberg said.
Apart from Meta, ChatGPT maker OpenAI and Google DeepMind are also researching AGI, a form of AI touted to be comparable to human-level intelligence.