Excerpt from Thinking outside the blackbox.
Algorithmic Sovereignty in Theory
By ‘algorithmic sovereignty’ in social media we generally mean regulating the use of information filtering and personalization design choices according to democratic principles, limiting their scope for private purposes, and harnessing their power for the public good. In other words, opening the black-boxed personalization algorithms of (mainstream) social media to users and public institutions.
While there have been discussions on the ethics of recommender systems and personalization (Bozdag, 2011; Helberger, 2019; Milano, Taddeo & Floridi, 2019), and on how to fulfil core algorithmic principles such as fairness, transparency, accountability, accuracy and privacy (algo:aware, 2018), in this hypothetical radical scenario algorithms could be exchanged, remixed, tested, plugged in, and even sold or rented. Even with such an opening, not all social media users can be expected to have the knowledge to build their own algorithms. Moreover, one could assume that some influential organizations (such as news media, political parties, ideological groups, etc.) would spread their own algorithms to their followers, so fragmentation and manipulation may still be present. We therefore identify and discuss prominent, intertwined preconditions (certainly not exhaustive) for granting in social media what we understand as “algorithmic sovereignty”:
First of all, and most obviously, ‘algorithmic literacy’. We can define it as basic knowledge of how filtering mechanisms and design choices function and of what their impact on one’s own life is and may be. It allows one to evaluate the differences between two similar mechanisms, and to have a metric describing one’s own information diet, so that the individual always keeps in mind that other filtering logics exist and are under their control. This could result in useful visualization tools and dashboards. Some paradigmatic experiments have already been done: i) to show users their filter bubbles and help ‘burst’ them (Nagulendra & Vassileva, 2017); ii) through informative browser extensions; iii) with interactive sliders such as in gobo.social. Clearly, in order to gain more control, algorithmic literacy needs to go hand in hand with digital and media literacy.
Another fundamental precondition is ‘algorithmic neutrality’, the idea that content should spread freely based on its merits, with no media bias. This could take various forms and degrees. More generally, it might be achieved with ‘counter-profiling techniques’, that is, conducting data mining operations on the behaviours of those who are in the business of profiling (Delacroix & Veale, 2019). More specifically, platforms should return machine-readable, unfiltered, chronologically ordered data; a reader device would receive these data, and an algorithm chosen by the user would then filter and prioritize them. This could be a fundamental architectural requirement for strong algorithmic sovereignty. By making sure that the filtering happens on the client side, the platform becomes effectively neutral, and nobody except the individual ends up knowing what was watched and for how long. Such neutrality would also aim to prevent deception by design.
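To make this architecture concrete, below is a minimal sketch in Python of what client-side filtering over such a neutral feed could look like. The endpoint, data fields and ranking rule are purely illustrative assumptions, not a description of any existing platform API.

    # Sketch: client-side filtering of an unfiltered, chronological feed.
    # The platform only serves raw, machine-readable posts; ranking happens
    # entirely on the reader's device, so the platform never learns what was
    # actually read. All names and weights here are illustrative assumptions.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Post:
        author: str
        topic: str
        text: str
        timestamp: int  # seconds since epoch, as delivered by the platform

    def fetch_unfiltered_feed() -> List[Post]:
        """Stand-in for the platform's neutral API: returns posts in reverse
        chronological order, with no server-side personalization."""
        return [
            Post("alice", "politics", "Budget vote today", 1_700_000_300),
            Post("bob",   "sports",   "Match highlights",   1_700_000_200),
            Post("carol", "science",  "New telescope data", 1_700_000_100),
        ]

    def user_filter(posts: List[Post], preferred_topics: List[str]) -> List[Post]:
        """A user-chosen (and replaceable) filtering algorithm, run on the device.
        Here: preferred topics first, otherwise keep chronological order."""
        return sorted(
            posts,
            key=lambda p: (p.topic not in preferred_topics, -p.timestamp),
        )

    if __name__ == "__main__":
        feed = fetch_unfiltered_feed()
        for post in user_filter(feed, preferred_topics=["science"]):
            print(post.author, "|", post.topic, "|", post.text)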
Alternatively, and more realistically, large online platforms could routinely disclose to their users and the public any experiment that the users were subjected to for the purpose of promoting engagement. They would also have to establish an independent review board to review and approve experiments in advance. This would be particularly in line with two newly proposed human rights: the ‘right to not be measured, analysed or coached’ (van Est, Gerritsen & Kool, 2017) and a ‘right to cognitive sovereignty’ that ought to protect individuals’ right to mental self-determination (Yeung, 2018b).
Further, another critical point is the development of natural metrics with which to evaluate the informative experience produced by the algorithm: for example, the percentage ratio of photos, videos and text, the time spent on the platform, the number of posts encountered, or which sources have been consumed. This information should be collected on the device and given back to the user to increase self-awareness.
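As an illustration, the following sketch computes a handful of such metrics from a hypothetical on-device session log; the event structure and field names are assumptions made for the example.

    # Sketch: "natural metrics" computed locally from a session log.
    # The event structure and field names are assumptions for illustration;
    # the point is that the summary is produced on the device and shown
    # back to the user, not reported to the platform.

    from collections import Counter

    # Hypothetical log of one browsing session, recorded on the device.
    session_events = [
        {"kind": "photo", "source": "friend_post", "seconds_viewed": 5},
        {"kind": "video", "source": "news_outlet", "seconds_viewed": 40},
        {"kind": "text",  "source": "news_outlet", "seconds_viewed": 12},
        {"kind": "text",  "source": "friend_post", "seconds_viewed": 8},
    ]

    def information_diet_summary(events):
        total = len(events)
        kinds = Counter(e["kind"] for e in events)
        sources = Counter(e["source"] for e in events)
        return {
            "posts_encountered": total,
            "time_on_platform_s": sum(e["seconds_viewed"] for e in events),
            "content_mix_percent": {k: round(100 * n / total) for k, n in kinds.items()},
            "sources_consumed": dict(sources),
        }

    print(information_diet_summary(session_events))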
It is also fundamental to guarantee a strong separation between the filtering mechanism and the data. Filtering mechanisms should be, as much as possible, stateless and idempotent, that is, capable of returning the same result if executed twice on the same input. This is a formal way to describe a more easily auditable machine. Neural networks, on the other hand, keep updating their internal state, making it impossible to perform certain analyses either ex post or ex ante. If the same task can be accomplished using a simpler technology, it is also simpler to explain, standardize, teach and judge; and if a simpler technology can be kept as the standard, the technical debt caused by the lack of explainability of algorithms might be avoided. Moreover, metadata, a special kind of data describing the content produced, is important to control. We argue that search and prioritization functionalities might perform better with more metadata usable by end users, because each piece of metadata is an opportunity to index and retrieve content differently. Producing content might become a more laborious process, but we deem this acceptable, because a model designed to favour quality content would likely come at the cost of quantity.
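A minimal sketch of what a stateless, idempotent filtering mechanism could look like in code is given below: all of the "state" (the user's declared interests) is passed in explicitly, so two runs on the same input always produce the same ranking. The scoring rule is deliberately trivial and purely illustrative.

    # Sketch: a stateless, idempotent filtering mechanism. The user's declared
    # interests are passed in explicitly, so calling the function twice on the
    # same inputs always returns the same output, which makes the mechanism
    # easier to audit, explain and compare.

    def score(post: dict, interests: dict) -> float:
        """Pure scoring function: depends only on its arguments."""
        return interests.get(post["topic"], 0.0)

    def rank(posts: list, interests: dict) -> list:
        """Deterministic ranking: same posts + same interests -> same order."""
        return sorted(posts, key=lambda p: score(p, interests), reverse=True)

    posts = [
        {"id": 1, "topic": "sports"},
        {"id": 2, "topic": "science"},
        {"id": 3, "topic": "politics"},
    ]
    interests = {"science": 0.9, "politics": 0.4}

    first = rank(posts, interests)
    second = rank(posts, interests)
    assert first == second           # repeated runs agree
    print([p["id"] for p in first])  # -> [2, 3, 1]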
There are, then, user-centered preconditions which generally concern the average user. Firstly, design choices and affordances, being part of personalized information consumption, should be to some extent adjustable, especially with regard to attention management: for example, disabling the auto-loading of posts (similar to the auto-play function for videos, which is usually on by default) or deciding to scroll only n posts each time one logs in. Similarly, YouTube already provides a “take a break” reminder. As of now, however, this feature has some limitations: it is available only on certain mobile apps, it is not set by default and it is not easy to find. Another interesting example is the possibility to hide any metric (as the plug-in Facebook Demetricator affords) in order not to be influenced by, if not addicted to, comparative quantifications (such as the number of likes and shares). The quantification and maximization of social interactions, in fact, contribute to creating a “culture of performance” (Castro, 2016) that seems to be negatively correlated with well-being (Verduyn et al., 2017). In sum, all those basic design choices that ultimately affect information behavior and personalize one’s information diet ought to be present and adjustable.
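To illustrate, such affordances could be exposed as ordinary, user-editable settings rather than fixed defaults, as in the following sketch; the field names and default values are hypothetical.

    # Sketch: attention-management affordances exposed as user-editable
    # settings rather than fixed defaults. Field names and defaults are
    # hypothetical; the point is that they are adjustable by the user.

    from dataclasses import dataclass

    @dataclass
    class AttentionSettings:
        autoload_posts: bool = False      # no infinite scroll unless opted in
        posts_per_session: int = 30       # stop after n posts per login
        autoplay_videos: bool = False
        show_like_counts: bool = False    # "demetricated" view by default
        break_reminder_minutes: int = 20  # 0 disables the reminder

    # A user explicitly opting back into a more immersive experience.
    settings = AttentionSettings(autoload_posts=True, posts_per_session=100)
    print(settings)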
Moreover, there are information self-determination and media pluralism concerns to cope with: in particular, the capacity to reach a balanced information diet from both an individual and a political perspective (Eskens, Helberger & Moeller, 2017) and, at the same time, the need to make sure that a range of viable ‘perspective widening’ tools is provided (Delacroix & Veale, 2019).
On the one hand, a ‘right to receive information’, as guaranteed by Article 10 of the European Convention on Human Rights, recontextualized in the digital environment could help establish what news consumers may legitimately expect from the news media with respect to the diversity or relevance of personalized recommendations (Eskens, Helberger & Moeller, 2017). In practice, this right could result in effective explicit personalization, especially in relation to political news. For example, a “serendipity button” for news and opinions, allowing people to opt in and receive challenging viewpoints from a wider ideological spectrum, especially during elections, has already been discussed (Sunstein, 2017).
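A minimal sketch of how such an opt-in serendipity control could work is given below; the labels and the mixing rule (reserving a share of the feed for challenging viewpoints) are assumptions made for illustration.

    # Sketch: a "serendipity button" as an explicit, opt-in control that mixes
    # viewpoints from outside the user's usual range into the feed.
    # The labels and the mixing rule are illustrative assumptions.

    import random

    def compose_feed(matching, challenging, serendipity_on, n=10, share=0.3):
        """Return n items; if serendipity is enabled, reserve a share of the
        feed for viewpoints the user would not normally be shown."""
        if not serendipity_on:
            return matching[:n]
        k = int(n * share)
        feed = matching[: n - k] + random.sample(challenging, min(k, len(challenging)))
        random.shuffle(feed)
        return feed

    matching    = [f"aligned_opinion_{i}" for i in range(20)]
    challenging = [f"challenging_opinion_{i}" for i in range(20)]

    print(compose_feed(matching, challenging, serendipity_on=True))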
On the other hand, a ‘right to information explorability’ could increase serendipitous encounters and reduce potential filter bubbles and echo chambers. This could be achieved through a right to choose the accuracy of information filtering and a more general right to information findability and discoverability. In practice, it might imply more interactive control over the algorithmic outputs and, as such, discoverability through sliders, topic categories, filters to navigate the information by source and keyword, or even algorithmic recommender personae, specific avatars used to filter information (Harambam et al., 2018). As the philosopher Mireille Hildebrandt (2019) advocates, societies must demand that companies explore and enable alternative ways of datafying and modelling the same person (e.g. multiple and dynamic profiling could be paramount to this end).
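As an illustration, recommender personae can be thought of as named, switchable filter configurations applied on top of the feed, as in the following sketch; the personae, fields and matching rules are purely illustrative.

    # Sketch: interactive control over algorithmic output via filters and
    # "recommender personae", i.e. named, switchable filter configurations.
    # Personae, fields and matching rules are illustrative assumptions.

    posts = [
        {"source": "local_paper",  "keywords": ["election", "city"]},
        {"source": "tabloid",      "keywords": ["celebrity"]},
        {"source": "science_blog", "keywords": ["climate", "energy"]},
    ]

    personae = {
        # "avatar"-like presets the user can switch between at will
        "news_junkie":    {"keywords": {"election", "city", "climate"}},
        "light_browsing": {"sources": {"tabloid"}},
    }

    def apply_persona(posts, persona):
        wanted_sources = persona.get("sources")
        wanted_keywords = persona.get("keywords")
        selected = []
        for p in posts:
            if wanted_sources and p["source"] not in wanted_sources:
                continue
            if wanted_keywords and not wanted_keywords.intersection(p["keywords"]):
                continue
            selected.append(p)
        return selected

    print(apply_persona(posts, personae["news_junkie"]))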
Final Considerations
Certainly, the general suggestions outlined above have many limitations and may have unintended and undesirable consequences that are impossible to systematically assess, or even speculate on, in a short article such as this. Yet we briefly identify some major weaknesses and critical issues.
If institutions empower users who cannot actually exercise their algorithmic sovereignty, those users may simply rely on default choices; this would leave the weaker users behind while empowering other actors as well as mainstream social media. Such a scenario is no different from a free market in which the players are not equal, and it might even legitimize the current mainstream platforms at the expense of potential emerging competitors. If we imagine that the “liberalization” of algorithms happened tomorrow, it is not hard to believe that a technocratic group of skilled individuals would claim their algorithm to be the best. By leveraging existing conditions of influence, they might simply reproduce the same form of algorithmic oppression, only with a variety of small actors in the market.
Also, this liberal market approach might convey the misleading message that one algorithm can be better than another. The perfect algorithm, in absolute terms, does not exist. Every one of us has different priorities, interests and time availability. The correct algorithm is the one that best fulfills the needs of the individual, and it may cease to do so once those needs begin to change; the fitting algorithm thus cannot be permanent. The above-mentioned “natural metrics” should be imagined like the labelling regulations imposed on the food industry (declaring allergens, ingredients, kilocalories, etc.). With this essential transparency, letting other actors create their own algorithms would still allow comparison from the outside. We might assume that negotiating these parameters should be an effort made and/or supported by current democratic institutions and by expert network bodies such as the W3C, IEEE, or IETF.
Because algorithmic sovereignty is a political challenge, the solution cannot be only technical. Algorithmic sovereignty implicitly places more responsibility on citizens, who ought to be able to decide on their own instead of delegating to someone else. We believe that institutions like schools, academia and public service media ought to be proactively responsible for the most significant precondition: algorithmic literacy.