API tokens from major tech firms like Meta and Google exposed on Hugging Face, putting AI models at risk
Tech giants Meta, Microsoft, Google, and VMware were among the victims of a security lapse on Hugging Face, a data science and machine learning platform. Exposed API tokens gave researchers the ability to modify datasets, steal models, and even view private models belonging to these organizations.
What are API tokens?
API tokens are secret strings, comparable to passwords, that let a program authenticate to Hugging Face on behalf of a user or organization. A token can be scoped to read or write access: a read token lets the holder download private models and datasets, while a write token also lets them push changes to repositories.
In simpler terms, whoever holds a valid token can act as the account it belongs to. That is why tokens left behind in public code, notebooks, or configuration files are such a valuable find for attackers.
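To make the risk concrete, here is a minimal sketch, not code from Lasso's research, of how a found token could be checked with the official huggingface_hub Python library; the token value is a placeholder.

```python
from huggingface_hub import HfApi

# Placeholder value -- real Hugging Face tokens are strings beginning with "hf_".
leaked_token = "hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

api = HfApi(token=leaked_token)

# whoami() reports which user or organization the token belongs to,
# which is how a researcher (or an attacker) would triage a found token.
identity = api.whoami()
print("Token belongs to:", identity["name"])

# With a valid token, private repositories owned by that account become visible.
for model in api.list_models(author=identity["name"], limit=5):
    print(model.id, "(private)" if model.private else "(public)")
```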
Researchers from Lasso Security discovered over 1,500 exposed tokens, which gave them access to the accounts of 723 organizations. In 655 cases, the tokens carried write permissions, allowing whoever held them to modify files in repositories. This put data, models, and the work of millions of users at risk.
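The write-permission case is worth spelling out. Assuming a token with write scope, the sketch below (the repository name, file path, and contents are invented) shows how the same library can overwrite a file in a repository, which is exactly the kind of tampering the researchers warned about.

```python
from huggingface_hub import HfApi

# Hypothetical write-scoped token and target repository.
api = HfApi(token="hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")

# upload_file() creates or overwrites a file in the target repository.
# With write access, the same call could swap out model weights or
# poison a training dataset -- the risk the 655 write tokens represented.
api.upload_file(
    path_or_fileobj=b"tampered contents",
    path_in_repo="data/train.csv",
    repo_id="example-org/example-dataset",
    repo_type="dataset",
    commit_message="routine data refresh",
)
```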
Imagine attackers manipulating training data to produce inaccurate or harmful results, or stealing powerful AI models and the valuable intellectual property they represent. That is the potential impact of this exposure.
This breach is a wake-up call for the AI/ML community. We must prioritize security to ensure these powerful tools are used for good, not harm.