The company said in a blog post that it plans to introduce a new type of account next year that will identify bots as automated.
According to the company, bot accounts can bring a lot of value to the service, but it acknowledged that they can be confusing for people when it is not clear that they are automated.
"It may be confusing for people if it is not clear that these accounts are automated," the company wrote, adding that in 2021 it plans to introduce a new account type to distinguish automated accounts from human-run ones, making it easier for people to identify bots.
Twitter has faced years of demands from disinformation researchers to disclose more information about bots, which have been used to amplify influence operations and make certain narratives appear more popular across its platform.
Twitter began asking developers to identify their bots as automated in March, but resisted pressure to apply a blanket label, saying in May that calls to label bots would not solve the problem it was trying to address.
The platform also said it is building a new type of memorialized account, planned for 2021, for users who have died.
Misuse of such accounts has also featured in influence campaigns, as in a case documented last year involving the verified account of an American meteorologist who died of cancer in 2016.
Twitter announced last month that it would relaunch its verification program early next year, after shutting it down in 2017 amid criticism over how it awarded the blue badges used to authenticate the identities of high-profile accounts.
It said it will begin removing blue badges from inactive and incomplete accounts that do not adhere to the new guidelines starting January 20, 2021, though it will leave the inactive accounts of people who have died untouched while it develops the new memorialized accounts.