Quick Take
Wikimedia will integrate generative AI into Wikipedia services.
AI will assist in reducing the workload for human editors and volunteers.
Focus will be on enhancing quality control and technical efficiency.
Wikimedia, the nonprofit that operates Wikipedia, has announced that the online encyclopedia will integrate generative artificial intelligence (AI) into its services. The foundation said it will be utilising generative AI in specific areas where the technology tends to excel.
Amid concerns that AI could replace editors, moderators and volunteers, Wikimedia made clear that the integration is meant to reduce the workload of its human contributors, freeing them to focus more on quality control.
“We will use AI to build features that remove technical barriers to allow the humans at the core of Wikipedia to spend their valuable time on what they want to accomplish, and not on how to technically achieve it. Our investments will be focused on specific areas where generative AI excels, all in the service of creating unique opportunities that will boost Wikipedia’s volunteers,” read the statement by Wikimedia.
“We will take a human-centred approach and will prioritise human agency; we will prioritise using open-source or open-weight AI; we will prioritise transparency; and we will take a nuanced approach to multilinguality,” it added.
AI and Wikipedia
AI will also aid editors by helping onboard new volunteers and by automating other tedious tasks.
“We believe that our future work with AI will be successful not only because of what we do, but how we do it,” the statement added.
Wikipedia already uses AI to detect vandalism, translate content and predict readability, but until this announcement it had not offered AI tools to its editors.
Wikipedia is also facing an existential crisis, with experts predicting that AI could eat it alive in the coming years. The number of bots scraping Wikipedia to train large language models (LLMs) has grown at an exponential pace, and the bot traffic has overloaded Wikipedia’s servers, driving a 50 per cent increase in bandwidth consumption.
In an attempt to fend off the hungry bots, Wikimedia last month released a dataset specifically optimised for training AI models.