X is piloting a program that allows AI chatbots to generate community notes
The social platform X is piloting a feature that lets AI chatbots generate Community Notes.
Community Notes is a Twitter-era feature that Elon Musk expanded after taking ownership of the service, now called X. Users enrolled in this fact-checking program can contribute comments that add context to particular posts. For example, a note might flag an AI-generated video that isn't clear about its synthetic origins, or append context to a misleading post from a politician.
Notes are published only when consensus is reached between groups that have historically disagreed in their past ratings.
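As a rough, hypothetical illustration of that consensus rule, the sketch below treats a note as publishable only when raters from two viewpoint clusters that usually disagree both rate it helpful. The cluster labels and the simple-majority threshold are assumptions; X's real system scores ratings with a more sophisticated bridging model.

```python
def reaches_bridged_consensus(ratings: dict[str, list[str]]) -> bool:
    """Toy version of the 'bridging' rule: a note is published only when
    raters from viewpoint clusters that usually disagree both find it
    helpful. Cluster names and the majority threshold are assumptions;
    X's actual algorithm is more involved than a per-cluster vote count."""
    opposing_clusters = ("cluster_a", "cluster_b")  # hypothetical labels
    for cluster in opposing_clusters:
        votes = ratings.get(cluster, [])
        if not votes or votes.count("helpful") <= len(votes) / 2:
            return False
    return True

# Example: helpful majorities in both clusters -> the note would publish.
print(reaches_bridged_consensus({
    "cluster_a": ["helpful", "helpful", "not helpful"],
    "cluster_b": ["helpful", "helpful"],
}))  # True
```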
Community Notes has been successful enough on X to inspire Meta, TikTok, and YouTube to pursue similar initiatives; Meta went so far as to eliminate its third-party fact-checking program in exchange for this low-cost, community-sourced labor.
However, it remains unclear whether using AI chatbots as fact-checkers will prove helpful or harmful.
These AI notes can be generated using X's own Grok or other AI tools connected to X through an API. Any note an AI submits is treated the same as a note submitted by a person, which means it goes through the same vetting process to encourage accuracy.
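As a sketch of what that submission flow might look like, the Python snippet below posts an AI-drafted note to X over HTTP. The endpoint, payload fields, and `AI_NOTE_WRITER_TOKEN` credential are all hypothetical, since the article doesn't specify X's actual API surface for AI note writers.

```python
import os

import requests

# Hypothetical endpoint and payload shape -- X's real note-writer API
# may differ; this only illustrates the general flow described above.
API_URL = "https://api.x.com/2/notes"  # assumption, not a documented route
TOKEN = os.environ["AI_NOTE_WRITER_TOKEN"]  # hypothetical credential

def submit_ai_note(post_id: str, note_text: str) -> dict:
    """Submit an AI-drafted Community Note for a post.

    The note enters the same rating queue as human-written notes,
    so publication still depends on contributor consensus.
    """
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"post_id": post_id, "text": note_text, "author_type": "ai"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()
```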
Given how prone AI is to hallucinating, or constructing context with no basis in reality, its use in fact-checking seems dubious.
According to a paper published this week by researchers working on X Community Notes, the recommendation is that humans and LLMs work in tandem: human feedback improves AI note generation through reinforcement learning, with human note raters remaining the final check before notes are published.
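The paper's proposed division of labor can be summarized as a short, hypothetical loop. Every callable below is a stand-in, and the simple vote-count reward is an assumption made in place of whatever reward signal the researchers actually use.

```python
def human_llm_note_loop(llm_draft, collect_ratings, update_policy, publish, posts):
    """Hypothetical outline of the paper's human-LLM pipeline: an LLM
    drafts notes, human raters judge them, the judgments drive a
    reinforcement-learning update, and humans remain the final gate
    before anything is published. All callables here are stand-ins."""
    for post in posts:
        draft = llm_draft(post)               # LLM writes a candidate note
        ratings = collect_ratings(draft)      # human raters score the draft
        reward = ratings.count("helpful") - ratings.count("not helpful")
        update_policy(post, draft, reward)    # RL step from human feedback
        if reward > 0:                        # simplified publication gate
            publish(draft)
```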
“The goal is not to create AI assistants that tell users what to think, but to create an ecosystem that allows humans to think more critically and understand the world better,” the paper says. “LLMs and humans can work together in a virtuous loop.”
Even with human checks, there is a risk of relying too heavily on AI, especially since users will be able to plug in third-party LLMs. OpenAI’s ChatGPT, for example, recently ran into problems with a model becoming overly sycophantic. If an LLM prioritizes “helpfulness” over accurately completing a fact-check, the comments it generates can end up being plainly inaccurate.
There is also concern that human raters will be overloaded by the volume of AI-generated comments, sapping their motivation to do this volunteer work properly.
Users shouldn’t expect to see AI-generated Community Notes just yet. X plans to test these AI contributions for a few weeks before rolling them out more broadly if they prove successful.