
Baidu restricts Google and Bing from scraping content for AI training


Chinese internet search provider Baidu has updated its Wikipedia-like Baike service to prevent Google and Microsoft Bing from scraping its content.

This change was observed in the latest update to the Baidu Baike robots.txt file, which denies access to Googlebot and Bingbot crawlers.
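The exact directives in Baidu Baike's robots.txt have not been published in this article, but rules of this kind can be checked with Python's standard library. The sketch below is illustrative only: the directives and the example URL are assumptions, not the file's actual contents.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules of the kind described: Googlebot and
# Bingbot are denied everywhere, while other crawlers remain allowed.
rules = """\
User-agent: Googlebot
Disallow: /

User-agent: Bingbot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# The example URL is made up for illustration.
url = "https://baike.baidu.com/item/example"
print(parser.can_fetch("Googlebot", url))     # False
print(parser.can_fetch("Bingbot", url))       # False
print(parser.can_fetch("SomeOtherBot", url))  # True
```

Note that robots.txt is advisory: it signals which crawlers a site wants to exclude, and well-behaved bots such as Googlebot and Bingbot honour it, but it does not technically enforce the block.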

According to the Wayback Machine, the change took place on August 8. Previously, the Google and Bing search engines were allowed to index Baidu Baike’s central repository, which includes almost 30 million entries, although some subdomains on the website were already restricted.

This action by Baidu comes amid increasing demand for large datasets used in training artificial intelligence models and applications. It follows similar moves by other companies to protect their online content. In July, Reddit blocked various search engines, except Google, from indexing its posts and discussions. Google, the exception, has a financial agreement with Reddit that gives it access to the platform’s data for training its AI services.

According to sources, in the past year, Microsoft considered restricting access to internet-search data for rival search engine operators; this was most relevant for those who used the data for chatbots and generative AI services.

Meanwhile, the Chinese-language Wikipedia, with its 1.43 million entries, remains available to search engine crawlers. A survey conducted by the South China Morning Post found that entries from Baidu Baike still appear in both Bing and Google search results, possibly because the search engines are serving older cached content.

This move comes as developers of generative AI around the world increasingly strike deals with content publishers to secure high-quality content for their projects. For instance, OpenAI recently signed an agreement with Time magazine for access to its entire archive, dating back to the magazine’s first issue over a century ago. A similar partnership with the Financial Times was inked in April.

Baidu’s decision to restrict access to its Baidu Baike content for major search engines highlights the growing importance of data in the AI era. As companies invest heavily in AI development, the value of large, curated datasets has significantly increased. This has led to a shift in how online platforms manage access to their content, with many choosing to limit or monetise access to their data.

As the AI industry continues to evolve, it’s likely that more companies will reassess their data-sharing policies, potentially leading to further changes in how information is indexed and accessed across the internet.

(Photo by Kelli McClintock)




Chinese firms use cloud loophole to access US AI tech

Chinese firms use cloud loophole to access US AI tech

https://storage.googleapis.com/gweb-cloudblog-publish/images/5aTDDWJGweG3Gzj.max-2600x2600.png

Grounding Analytical AI Agents with Looker’s Trusted Metrics