IGF 2022 Day 2 Town Hall #83 Public interest algorithms for content recommendations

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> ISA STASI:  Hi, everybody.  I think we're all having a little trouble connecting.  This latest link should work for everybody.  I do have a couple people in the room, I'm not sure if they can actually, already listen to us?  Or see us?  Yes, it seems so.  Okay.  Good to know.  Thank you.  We still have a few minutes.  If you need a glass of water or whatever, go for it before we kick off.  Thank you. 

     >> MARTHA TUDON:  Hello, everyone, I think we can start.  It's really nice to be here.  I'm Martha Tudon.  We are here to talk about public interest algorithms for content recommendations.  Each participant will have five minutes.  When you start hearing the sound I'm going to play, it means you have to wrap up because you're almost out of time.

     Then we'll have a Q&A session, and each participant will have a couple of minutes, maybe one or two depending on time, to deliver a key message they want us to hear before leaving this panel.

     We're going to start with Maria Luisa.  She'll give us a general introduction and tell us why we're here and why it's important to speak about this topic.  Maria, you have five minutes.

     >> ISA STASI:  Thank you.  So, today we're going to discuss, with a variety of stakeholders, possible ways to build content recommendation algorithms that achieve public interest objectives.

     Why is this important?  Content recommendation algorithms have a huge impact on the way people see, access and share content.  So, those algorithms have a huge impact on the information individuals receive and, at a collective level, on information flows in society.

     So, when we talk about these types of services and technologies, we're talking about algorithms that have a major impact, among other things, on media diversity as a policy objective, which we believe is a fundamental pillar of a democratic society.

     The other premise I want to flag here is that we're going to concentrate on content recommendation algorithms, which are slightly different from content moderation algorithms.

     Content moderation algorithms focus on illegal content, illegal based on national rules or on the terms of service of specific platforms, and moderation is usually about removal of content or suspension of accounts.

     Content recommendation is a much wider environment, about promoting and selecting content, and we're not necessarily talking about illegal content.  In fact, most of the time, I'd say we're not talking about illegal content at all.

     Now, this premise is fundamental.  It's not that whatever we're going to discuss won't work for content moderation as well, but I think we would need a separate discussion for that.

     The third point I want to raise is that the status quo for content recommendation algorithms, if we look around, is that they are basically offered by private companies, and especially by large social media platforms.

     A few of them offer content recommendation systems, and those systems are optimized for profit, as they arguably should be, because they're private companies, and they don't take into account public policy objectives that could be linked to the use of those recommendation systems.

     What we have is that a handful of algorithms are used by the vast majority of people, optimized for profit and largely indifferent to public policy objectives.

     The market is highly concentrated; we're talking about very few private companies that hold a lot of power.  This scenario is challenging at both the individual and the collective level.  The focus of today's discussion is not the harms that might or might not be created by this scenario, but how we can change the status quo if we want to.

     In particular, what I'd like to discuss today is: which barriers or market failures we need to overcome if we want this change to happen; which incentives exist, or do not exist, to change the status quo, who should create those incentives and how; and which steps we should take to change this scenario.  Can we rely on existing rules, or do we need something new?  Who is supposed to intervene, legislators, regulators?  Is this a business-to-business issue that can be solved on its own?  What is the role of users, of people, and of the technical aspects of it?

     If we're talking about a variety of content recommendation algorithms, is this feasible?  How do we need to proceed to reach a scenario where this is possible?

     Our contribution today is to throw out a few ideas that we have, to sustain the policy discussion.  The first question would be, basically: do we want to try to create content recommendation algorithms that are more sensitive to, or optimized for, public policy objectives within the large platforms that already exist?

     In other words, do we want to work with the large platforms to fix or adjust their own algorithms, or do we want to create more diversity, a different environment?

     We think the latter would be better.  It would bring a number of positive outputs.  It would decentralize the power of content recommendation and not ask a few private companies to decide which level of diversity is needed by individuals and by society.

     That would otherwise be a massive power for private companies to hold in a few hands, which we know.  It would also create competition, and the hope is that this competition among different players will bring more quality, more innovation, more alternatives.  It will also empower users and individuals to make their own choices.

     Our way to solve this would be to go through competition remedies that open up the market: identify the barriers to entry to this market, which is currently quite closed, lower those barriers and encourage more actors to come in.

     More specifically, we advocated for changes in Europe recently, during the discussion of the Digital Services Act.  We started from the point that content recommendation is offered as a service within a bundle of services.

     We said maybe the very large online platforms, in the wording of the Digital Services Act, could be forced to unbundle this service and open up the possibility for users to decide to go to another provider for this service and plug it into their Facebook profile or their Instagram profile.

     Unfortunately, this proposal wasn't supported by the majority, so it's not in the text of the Digital Services Act.  There might, however, be a remedy through the Digital Markets Act, recently approved at the European level.

     There's specifically one provision, Article 6(12) of the Digital Markets Act, that includes the possibility to impose on the gatekeepers, in that wording, probably the same platforms, an obligation to provide access to competitors, to business users, when it comes to their online social networking services.

     This would imply, of course, a degree of interoperability; otherwise, it's going to be impossible to plug an algorithm into a platform.
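
     To make the idea concrete, here is a minimal sketch, in Python and with purely hypothetical names, of what such a plug-in could look like: the platform hands a provider the candidate posts for a user, and the provider returns the ordering.  It is only an illustration of the interoperability point, not any platform's actual interface.

```python
# A minimal sketch, with invented names (none correspond to a real platform API),
# of "unbundled" recommendation: the platform keeps hosting and delivery,
# an external provider decides the ordering.
from dataclasses import dataclass
from typing import Dict, List, Protocol


@dataclass
class Post:
    post_id: str
    source: str            # e.g. the outlet or account that published it
    topic: str
    engagement_score: float


class ExternalRecommender(Protocol):
    def rank(self, user_id: str, candidates: List[Post]) -> List[Post]:
        """Return the candidate posts in the order they should be shown."""


class DiversityRecommender:
    """Toy provider that boosts posts from sources the user has seen least often."""

    def __init__(self, seen_counts: Dict[str, int]):
        self.seen_counts = seen_counts

    def rank(self, user_id: str, candidates: List[Post]) -> List[Post]:
        # Fewer prior exposures to a source -> higher position in the feed.
        return sorted(candidates, key=lambda p: self.seen_counts.get(p.source, 0))


def build_feed(user_id: str, candidates: List[Post], provider: ExternalRecommender) -> List[Post]:
    # The platform delegates only the ordering decision to the plugged-in provider.
    return provider.rank(user_id, candidates)
```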

     The technical aspects are, once again, very important here.  I'm curious to hear from the other panelists what they think about it.  This could be one of the possibilities.

     Yet there could be questions remaining, also because I don't think there's a magic bullet here.  What we need to do is create the conditions for an alternative scenario to flourish, but there might be a variety of components to bring in, not just one single rule to fix it all.

     One of the major questions will be sustainability of ‑‑
     >> MARTHA TUDON:  You have like 15 seconds ‑‑
     >> ISA STASI:  I need less.  One of the discussions will be how sustainable those different algorithms are.  What could the business model be?  What's the role of the state in supporting, endorsing them, et cetera?

     And the other thing is, do we need national rules for this or national remedies or do we need to be more ambitious and think about regional or international remedies?  Thank you. 

     >> MARTHA TUDON:  Thank you for providing the framework and introduction.  Now it's time for Ali‑Abbas Ali to speak.  Based on what you've heard, what should be the approach and what do we need to think about for this to be possible?  And when you hear the little sound, it means your time is up.  So please wrap it up, thank you.

     >> ALI-ABBAS ALI: Thank you very much for this invitation.  So, I'm approaching this question from the point of view of the regulator with responsibility for media plurality in the UK.  Wherever we end up in terms of harms and remedies, someone in a role like ours will have to implement it.

     With that perspective, I'd like to focus on three of the key questions that are on my mind.  What does the evidence say?  What is a proportionate response to the problem?  And will it work?

     The UK evidence is really interesting.  We published a report on this earlier in November, and it shows that people who use social media most often for news are less likely to correctly identify factual information.  They are less trusting of democratic institutions and news media.

     All of that really matters.  We didn't find the same patterns for people who use search engines or news aggregators to access news.

     We did an exhaustive literature review, annexed to our report.  We found social media use can have a polarizing effect.  Two studies showed this for Facebook users in the USA.

     It was less clear through that literature review whether social media use causes changes in people's levels of news knowledge or in the levels of trust, so, that question is still open. 

     Now, our study was the first of its kind in the UK and, importantly, it didn't allow us to identify causality.  It could be that social media has a polarizing effect, or that people with a predisposition to those attitudes gravitate towards social media.

     We need to do more hard work.  We'd really like to get an answer to that causality question.

     In terms of the three things on my mind, the evidence says there is definitely something to worry about.  A lack of trust in news and an unwillingness to engage with other viewpoints both undermine a functioning, plural media landscape, and somewhere in that, social media is implicated.

     It's not specific enough to guide our reaction.  What it does tell us is that there's a need for more transparency about how recommendation systems and algorithms operate, so we can better understand what's happening.

     That takes me to my second question.  What's a proportionate response?  The short answer is we don't know yet.  The evidence doesn't lead us to a place where we can be certain what problem it is that we're trying to solve.  Public service algorithms provided by independent third parties, which could give social media users an alternative set of recommendations, could be a valid option, and there's real power to it.

     But our view is it's a little too early to say.  Now, I have an admission to make: as a former telecoms regulator, this idea resonates with me.  I can see parallels with introducing downstream competition and unbundling local loops, and we know that worked, but we also know it was really hard, it was heavily litigated and there were casualties along the way.

     That's why from where I am today, we need to keep the door open to collaborative working with the social media platforms.

     In terms of specifics and things that are going to challenge us, that we need to be ready for, I'll give you a flavor with two of those. 

     The first is commercial.  Last year, working with the competition authority in the UK, Ofcom gave advice to the government on bargaining relationships, with a real focus on news publishers.

     We identified an imbalance in the bargaining power of platforms and publishers and advised that a code of conduct could address this.

     In doing that, we found that the overall value of news content that is being generated and traded between platforms and publishers may not be quite as large as you think. 

     There's a huge market here, but the proportion of it that is generated by news media may be relatively small.  What matters here is that the pool of value to share between platforms and publishers is already contested.  It's not necessarily one where a new party that needs to be paid would be welcome.

     And of course, if we're working with this hypothesis of third-party independent algorithm providers, over time they'll want to grow their businesses, and that adds a source of tension to what is already a tense relationship.

     The second challenge is consumer behavior.  Our research found that about half of people expressed a desire to have more control over what they see online.  In contrast, a fifth were really happy to leave everything to the platform: the internet is a big, confusing world, and someone's organizing it for them.

     We then asked a related question.  And we found that about half of people always accept cookies.  Now, it's not an exactly analogous activity, but what it points to is something really important where we need to understand people's behaviors better.

     And that is that the expressed desire for control and actually exercising that control when you have it may not be correlated.

     Now, we also know that the way cookie choices are presented and so on is awful.  There's a whole load of technical stuff related to that which I won't go into.

     All of that just leads me to being cautious.  Whatever solution we have, someone has to do the hard work of implementing it and making it work.  The risk is that you get into increasingly complex layers of regulation in order to address the sequential problems you face.

     That's something of our experience from unbundling local loops.

     We know introducing downstream players can work eventually, but that doesn't avoid all of the pain and risks that are involved in getting there.

     That's why, from a regulatory perspective, we're still keeping an open mind on the appropriate remedy route.  What I can promise is we'll be doing more work on this.  Our aim is, over the course of the coming year, to develop a set of proposals that we can present to the UK government on how it might address the situation in the UK, and to consult widely on those.

     To everybody who is here and interested, you know, we have our doors wide open and are willing to listen to all and every view on this and take them into account.  We recognize this is complex.  We recognize the solution may need international collaboration and so on, and we're probably not going to be able to get there on our own.

     And I hope, Martha, with that, I'm just in on time.

     >> MARTHA TUDON:  One minute, but it's okay.  Next is Natali Helberger.  What is your view on this?  Do you have other considerations or ideas we have to keep in mind when discussing this?  Five minutes.

     >> NATALI HELBERGER:  Thank you so much, Martha, for inviting me and for putting this important topic on the agenda.  I think we cannot overestimate the importance of exposure diversity, and here is why.  Exposure diversity, so, giving people the opportunity to read and hear news and media content from a variety of sources, adhering to the different tastes and preferences in society, is simply critical to advance a lot of goals and public values that are important to us.  We need exposure diversity to promote tolerance and inclusiveness in public debate, to decrease polarization and to increase political knowledge.

     Exposure diversity is a very important means to achieve a lot of goals in society. 

     I agree with you about the importance of promoting initiatives to build more diverse systems.  I think in the past years our focus has very much been on how to deal with platforms, how to govern platforms, and in all these discussions there was sometimes a tendency to forget a bit that there's a whole broad media market out there, and that we also need to look into how digital technologies can create new opportunities for diversity there as well.

     I think the Ofcom engagement of the past years was important in putting this aspect on the agenda.  There are really good initiatives out there.  Only last week, I was participating in a one-week workshop with academics and media organizations from Europe, Canada and Brazil who were experimenting with ways to build more sophisticated algorithms that expose people to more diverse views, to enrich their diet, to broaden their horizons, and by doing so actually make media much more fun and interesting.

     I think we shouldn't forget the fun factor here.  People, and that is also something our research shows, are not interested in seeing more of the same, and only content that is optimized for making them click on advertising.

     Actually, a lot of our respondents indicated they very much care about not missing important viewpoints.  They care about news and media content being diverse, surprising and entertaining.

     Those are important considerations for them, and also important distinguishing factors, something that could make a difference for them between social media platforms and the media out there.  This is something the media need to develop further as a selling point, something that makes watching media more attractive and fun.

     The problem is that, to be successful, these initiatives need support: institutional support, room, time, funding to invest in systems that optimize beyond clicks.

     Because these are complex questions.  This is nothing that you build in two months.  So, I think also, within organizations, there needs to be more time and more room and more encouragement to also think ahead. 

     And not only focus on short‑term clicks, but also think about more diversity‑sensitive metrics.

     And I must say that even within public broadcasters, where you would assume that their mission is to provide diverse content and to develop more diverse recommendation systems, even there, there's still a tendency to optimize for clicks.

     So, I think there's work to be done, also to create a favorable environment and encouragement within the institutions to experiment with digital technology and how we can make it better, more diverse and more interesting.

     I think that's an important point to take into account.  At the same time, Isa, I don't share your pessimism about the DSA.  First of all, I think it's good to acknowledge that we have here a really ambitious, experimental piece of legislation, from which we can also learn outside Europe.

     And I think a lot of eyes are on Europe trying to see how this experiment will turn out, and I think we have a great opportunity to make it work.  Threats to pluralism and threats to civic discourse are explicitly systemic risks, and exposure diversity, as I tried to explain, is an important means of mitigating those risks.

     Social media platforms are required to have risk mitigation strategies in place, which can include adapting their design and content moderation algorithms, and should include initiatives to optimize not only for clicks, but also for accuracy and, for example, diversity.

     This is a really complex task.  There are fair arguments to make that they should, and that it makes a lot of sense for large social media platforms to also look to the outside, at all the creative initiatives that are ongoing and experimenting with diverse news recommenders outside the platforms, and at how to give those space within the platforms.

     There's one provision in the DSA mandating, or suggesting, that platforms should give users a choice between different recommendation parameters, and again, people appreciate this and they're looking for this.  This is also an opportunity to think beyond what the DSA does right now, profiling or not, and to see this as an invitation and a springboard to launch recommendation algorithms, including those that are developed outside the platform.

     I'm actually very in‑favor of thinking along these lines and trying to figure out ways to incentivize doing so. 

     And I think with this, I would like to conclude.  Thank you. 

     >> MARTHA TUDON:  Thank you so much, Natali.  Now we have our last speaker, Michael Lwin.  What is your reaction to this proposal from the technical point of view, and what is, basically, the technical side of this?  You have five minutes.

     >> MICHAEL LWIN:  I think sharing my screen might be helpful.  This is a Facebook post, and this is what a Facebook post is made of.  To prove it, here's the link and the data for the Facebook post.  Let's paste it.  And there's your Facebook post.  It's from a Myanmar television station; it has a video and it's covering a football game, right?  Here's the same text as the text in the post.  Whoops.

     >> MARTHA TUDON:  We can only see the code, not the Facebook page.

     >> MICHAEL LWIN:  Okay, here's the Facebook post, you can see it now, right?  Correct? 

     >> Yes, we can.

     >> MICHAEL LWIN:  This is the post, I pasted the URL from it, and this is Facebook's actual post data.  So, my response is: all the comments are great, but I think the big tech companies are relying on regulators' relative lack of understanding of how the sausage is made, whereas, really, you could solve this today.

     It's actually easier than you think.  I don't think it's terrifically complex, it just requires technical knowledge.  I say this having both degrees; I have a computer science master's degree.  This is Facebook's own API, called CrowdTangle, that they've exposed.  Twitter has its own API, and so does Reddit.  Instagram doesn't have one yet.

     Here are your likes, the number of likes, 1,008.  Let's see if it's right?  Over 1K, okay.  There's a breakdown of your shares.  This is JSON format and these are key-value pairs.  This is the anatomy of a post.
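
     For illustration only, here is roughly what such a key-value record might look like and how a third party could read it in code.  The field names and values below are invented for the example; they do not reproduce CrowdTangle's actual schema.

```python
import json

# Invented post record in JSON key-value form; not CrowdTangle's real schema.
raw = """
{
  "post_url": "https://www.facebook.com/example/posts/123",
  "account": {"name": "Example TV", "followers": 250000},
  "message": "Coverage of tonight's football match",
  "statistics": {"likes": 1008, "shares": 57, "comments": 12},
  "posted_at": "2022-11-30T18:00:00Z"
}
"""

post = json.loads(raw)
print(post["statistics"]["likes"])   # -> 1008
print(post["account"]["name"])       # -> Example TV
```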

     So, for these social media companies, all you have to do as a regulator is this.  In the U.S., okay, maybe you have complicated First Amendment claims, but certainly the UK and EU could pass regulation on this.

     Require social media companies to essentially expose these APIs, either publicly or maybe with some gating requirements.  And then you would have third parties.  This is like the Baby Bells, the breakup of AT&T earlier.  Instead of telephone lines or spectrum, it's data, right?

     You know, social media companies have network effects.  They're very well run and have all these users and it's hard to compete with that, right? 

     So, really, with regulation compelling public APIs, all providers need is data.  And then creating and training an algorithm on that data, for diversity, for policing of hate speech under international human rights law or other paradigms, is not that hard.  It's just a matter of time and having the data.

     So, in my view, regulators could pass regulation requiring, let's say, you could pick, like under U.S. law, (?) thresholds.  If the valuation of a company is above a certain threshold, a big tech company, they have to expose public APIs of their post data, allowing independent third parties to consume that API and build the algorithms they want to build.

     And maybe prompt the social media companies to feature algorithms to filter posts.  If you want, and I'm just making this up, if you want an algorithm that filters posts strictly under the ICCPR's Article 19, right?  That could be deployed in Facebook or in a third-party app version of Facebook.

     If you wanted, like, the ACLU's version of it, that could be deployed, right?
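
     As a rough sketch of what such a third-party filter could look like once the post data is available: a function that scores each post against whatever policy the provider has chosen and drops the rest.  The classifier below is a trivial stand-in; a real provider would supply its own trained model, whether built around ICCPR Article 19 or any other paradigm.

```python
from typing import Callable, Dict, List

Post = Dict                             # posts in the JSON-style shape sketched earlier
Classifier = Callable[[Post], float]    # returns the probability a post violates the policy


def policy_filter(posts: List[Post], classifier: Classifier, threshold: float = 0.8) -> List[Post]:
    """Keep only the posts the provider's model deems within its chosen policy."""
    return [p for p in posts if classifier(p) < threshold]


def toy_classifier(post: Post) -> float:
    # Stand-in for demonstration only; a real provider would train a model.
    flagged_terms = {"exampleslur", "examplethreat"}
    words = set(post.get("message", "").lower().split())
    return 1.0 if words & flagged_terms else 0.0
```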

     And the key is just the data and then, perhaps, funding for initiatives for computer scientists and developers to build it.

     That's it.  And once you do that, it would, I think, essentially be the equivalent of breaking up Ma Bell into the Baby Bells.  People could pick and choose which version of a social media platform to use based on how the content would be filtered.

     And they would be able to leverage the network effects, you know, Facebook has whatever it is, over 2 billion monthly active users.

     They'd be able to leverage those network effects in a way that's pro-competitive.  And I think this is basically the solution.  In the absence of regulation, well, Facebook already has very restricted access to this API, and there are rumors that they're going to deprecate it, meaning, like, shut it down, right?

     Twitter has its API available, but that may change with the current change of management; maybe they will open it up even more.  I think Elon Musk has expressed interest in that in the past, or maybe they shut it down.

     If a regulator compels the social media companies to build out these APIs with specific endpoints, show all posts, be able to search through posts, then that data would be free.  You'd anonymize that data, right?  This is only public account data, from people who have consented to having their data public; it's not private account data.  I think that's important.
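
     Purely as an illustration of the sort of minimal, regulator-mandated endpoint list being described (the paths and parameters are invented, not any platform's real API):

```python
# Invented endpoint spec for illustration; not any platform's real API.
MANDATED_ENDPOINTS = {
    "list_posts":   "GET /public-posts?account_id={id}&since={timestamp}",
    "search_posts": "GET /public-posts/search?q={query}&lang={language}",
    "post_stats":   "GET /public-posts/{post_id}/statistics",
}

# Scope note: only public-account data would be returned, and identifiers of
# ordinary users interacting with a post would be anonymized before release.
```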

     Public accounts are like the social media influencer pages that are having such an effect on public discourse.  So, my view is this solution is out in the wide open.  Regulators may not have a full grasp of the technical side, but this is a scalpel instead of a sledgehammer.  You could pass a law along these lines and it would work.  I'll end my comments with that.

     >> MARTHA TUDON:  Thank you so much, Michael.  Now we have time for Q&A.  And we welcome questions from the audience.  You can see there's a lot of people there.  If you want to participate, please go next to your microphone. 

     >> Shall we just ask the question?
     >> MARTHA TUDON:  Yeah, who is speaking, I'm sorry, I can't see.

     >> I'm waving a little bit.  I'd really like to thank you for bringing this panel together and tackling this problem in such a constructive way.  I really enjoyed the presentations.  I have one short question for the last speaker.  What's the limit of this, or do you also see dangers when the state, maybe together with civil society, determines these questions to such a great degree?

     I think it's like an operation on the open heart of democracy: we cannot let companies do it, but it's also very complicated to determine this publicly.

     The second question: I was really interested in Professor Helberger's idea to increase choice by regulation.  The idea would be to force platforms to give consumers more than one option, and I feel that the DSA actually stopped short here, because the only mandatory alternative in the DSA is basically to give one option without personalization.

     So, my question would be whether we could be a little more progressive, like in the online harms bill.  I think there's a clause there for user empowerment, if I have the latest information on that.

     So, there's a clause on user empowerment, which actually is more of an obligation to force the companies to offer more than one option.  And I feel this could be a step towards autonomy by design, because then users would start choosing and would also be presented with a meaningful choice, the same way they now accept cookies, or more often decline cookies, because they have a meaningful choice in many ways.

     Could this be a solution to offer a meaningful choice and bring public interest algorithms in?

     >> MARTHA TUDON:  Thank you for your question.  I'm going to read another one and then ask the speakers to give short, concise answers to all the questions.  The next question comes from (?) from ISOC Argentina.  To what extent would exposure to diverse news sources ensure actual consumption of that news?  Is there a risk of pushing some users to alternative, more partisan networks?  And she gives some U.S. scenarios.

     So, I'm going to ask the speakers to please raise their hand, to please answer the questions.  So, I think, Michael, you were first.  Go first, please?
     >> MICHAEL LWIN:  So, to the first question, the one around choice: right now, the big tech companies have all the cards.  When civil society does a review, it's because Facebook gives them a grant to do the review.  Facebook can just stop giving them grants, which is what happens.

     If you just had publicly exposed API endpoints, then civil society could build for a particular content point of view.  If, you know, Fox News wants to go and train their own, they can do that.  And then let consumers choose, basically, and they can choose what version of Facebook they want to look at.  That might lead to a different kind of problem.

     That's certainly possible. 

     But to me, it at least increases consumer choice.  I'll let other people respond.

     >> MARTHA TUDON:  Natali and then Ali.

     >> NATALI HELBERGER:  I couldn't agree more with you.  I had hoped for more of an obligation to offer different choices; right now, only the non-personalized option is mandatory.  I should probably also have added that another important international standard-setting organization, the Council of Europe, went a step further and very much encourages offering more diverse choices and a choice between different recommendation algorithms.

     I still think that the risk mitigation provisions in the DSA could be read as a means to encourage providing conditions for external players in this area.

     As mentioned earlier, I think exposure diversity is a solution to a lot of these problems that we're trying to tackle, and there's a lot of expertise out there, so I think it's a wise choice to see how we can make those initiatives work also for platforms, and by this way make the offer more appealing to users.

     I could even imagine that there's a business argument to be made for why it's a good idea to open up, and I really, really liked your presentation, Michael, on how this is actually much less utopian than we thought.

     I think it's important to point out that exposure diversity isn't the same as consumption diversity.  It's merely offering people a more diverse choice that they would otherwise not see.

     So, it is enhancing choice, but not forcing anybody to watch particular things.  Again from our research, this is something that a lot of users would appreciate greatly. 

     >> ALI-ABBAS ALI: Philosophically, we're very much in a position that today, choices are being made for consumers.  Facebook or whoever it is, is interpreting the things you like, the things you read, the things you follow and is giving you choices on the basis of that, but the user is not, necessarily, in any way, active in that relationship.  They're passively being given information off the back of that.

     And what matters here, for us, is to what extent does that change the user's behavior?  The question of choice then becomes really interesting.  Once you give people choices, will they act on them?  The comment that we have in the chat points to that.

     But the role of the regulator here, certainly, as our role is constituted, is just to make sure that the choices are available.  That there isn't a party that controls your access to the breadth of plural media that's available.  That you, as a consumer, as an audience, as a citizen, can reach those choices. 

     You may then choose to say "I will go to a heavily right-wing source," "I'll go to a heavily left-wing source," or whatever, "I'll read hate speech or misogyny."  Beyond that level, you start interfering with people's freedom to receive information.

     And that's a hard barrier that we need to think about really carefully: the limits of where we are and where we think the plurality regime should intervene, making sure the wealth of information available to you is not being constrained by someone outside your direct control.

     >> ISA STASI:  I think what we're discussing here are competition issues, how to open up a market.  There cannot be competition if there's no freedom to choose.  Every time we talk about the possibility of empowering users to take back a little bit of the space for making choices that has been taken from them, that's an essential component of having competitive and open markets.

     I do agree that people's behavior shows that a number of them are reluctant to make choices.  But I think we've been made this way.  We've been made to believe that we have to limit our imagination to this take-it-all-or-lose-it-all situation.  I think we, as civil society, researchers or regulators, need to make sure that the boundaries of the discourse are stretched much further than what we're being made to believe.

     Also, in this situation, I do agree that there is a risk that people could make a number of choices that we might agree or disagree with, and that's okay.

     But it's part of their autonomy.  We need to balance this against much more paternalistic and invasive solutions, and actually, currently, those sorts of paternalistic decisions would not be taken by any public power, but mostly by private power.

     Which, per se, is even less legitimate in taking these decisions.  So, I think we need to look at the big picture here and see that there might be risks and trade-offs, but are those risks ones society can take?  And what are the trade-offs if we don't act this way?

     I think it's not a one-way route.  It's not so simple.  We might not fix it all with one specific route, but we don't need to be afraid of creating chances for people.  We don't need to be afraid of tackling a challenge.  There's a risk people will go somewhere else?  Fine.  We have literacy policies that we can use.  We might need to educate people, consumers, individuals in this freedom to choose.  But I would go in that direction carefully; I'd be very careful not to take the opposite direction.  Yeah.

     >> MARTHA TUDON:  Thank you, Isa.  Is there any other question from the audience?  We still have time for one more. 

     >> My name is Brandy and my question is about the proposal to diversify news media sources.  How does this scale when content classification, particularly video content classification, and particularly in languages other than English, is still such a nascent field?  How do you do that effectively so that it's not just creating a list of major media brands and ensuring exposure on both sides, but also considers that people develop, you know, political and social viewpoints from, say, influencers or bloggers or lots of other media sources?  Thanks.

     >> MARTHA TUDON:  Thank you, Brandy.  Anyone wants to answer? 

     >> ISA STASI:  I don't have an answer, but there are additional elements I can raise.  Maybe this is more a technical than a policy issue.  I believe this has a lot to do with the fact that the content recommendation systems we're dealing with right now operate on a global scale, and they usually start from the mainstream and apply that mainstream as it is to each and every other context.

     They're detached from a lot of different contexts.  And this is true for content moderation as well as content recommendation and video content classification, et cetera.

     Maybe a way to address this issue, and this is an open question, is to bring the provision of the service closer to the community that is going to use it, to providers that will understand the context better.  They'll be able to do a better classification and catch a lot of details that will be missed if the big global platforms keep reasoning in a one-stop-shop solution for everybody.

     >> MARTHA TUDON:  Thank you, Isa.  Ali?  Then Natali and the wrap‑up session.

     >> ALI-ABBAS ALI: One of the things you need to be acutely aware of here is the probability, or possibility, of embedding particular business models, so that you foreclose on the ability of innovative models or new players to enter the market, because the regulation you have lets Facebook say, "I opened my API as Michael suggested and therefore things are happening," while others don't have the scale to do that, and so on.

     Then the ability to launch new models in that regulatory environment becomes more difficult.  So, it is always a risk.  Whenever you introduce a piece of regulation that embeds one particular way of working, or favors one particular way of working, what does it do to the rest of the market?  It's something you must have at the forefront of your mind as you go forward to regulate.

     I'm not saying it's an intractable problem or can't be solved, but you can't lose sight of it.  You might make things worse rather than better. 

     >> NATALI HELBERGER:  Three things.  First of all, to build on what Ali said: partly, these are new questions, right?  We are experimenting, also with regulation, and something that I found very convincing, for example, being put forward in the AI regulation, is this idea of sandboxing: creating environments that actually allow companies, as well as regulators, to experiment and see what works and what doesn't.

     So, I think we also need to change our ideas of regulation and allow for more agility here.

     The second thing: I'd actually like to add a word of caution that scalability isn't the only important argument.  You mentioned communities; there's a lot of demand in local communities for more diversity in recommendation, so scalability isn't everything.  And the third thing is that the question you ask also refers to how we make this work in practice, right?

     So, for diverse recommendations there's no technological quick fix.  It requires us to invest in metadata, in skills, in training and in models, and that's why I think it's important that we also think about how we can create an enabling environment for making progress in diversifying algorithms, creating more diverse metrics and understanding better what all the good things are that we can do with recommendation algorithms.  I think that is important to keep in mind.

     >> MARTHA TUDON:  Thank you, Natali, so, now we are finalizing this session.  We have one last round.  Basically, we have a question for all the speakers.  If you had the power to address these issues tomorrow, who would be the key stakeholder you would address and what would be your first task?  You each have one minute, and if it's okay, we can start the other way around.  Meaning Michael, then Natali, then Ali.

     >> MICHAEL LWIN:  What was the question exactly?
     >> MARTHA TUDON:  You have the power to address this issue tomorrow, who would be the first stakeholder you'd address and what you'd ask?
     >> MICHAEL LWIN:  Require big tech companies above a certain size threshold to publicly expose APIs related to post data; there are probably, like, 20 endpoints.  And require them not to throttle or hide the data in any way.  And then, maybe, they also have some screening, like the app stores.  But if a third-party provider has trained an algorithm on that dataset, then the platforms would have to list it as a filtering option in, say, Facebook's news feed or Twitter's.

     You could literally have a drop-down menu to filter.  That's what I'd do.  And I emphasize, if that's done, the computer science work is much easier than you think.
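
     A minimal sketch of that drop-down idea, with hypothetical names: the platform keeps a registry of vetted third-party rankers and applies whichever one the user has selected.

```python
from typing import Callable, Dict, List

Post = Dict
Ranker = Callable[[str, List[Post]], List[Post]]   # (user_id, candidate posts) -> ordered feed

RANKER_REGISTRY: Dict[str, Ranker] = {}


def register_ranker(name: str, ranker: Ranker) -> None:
    # In practice, this is where app-store-style screening of providers would happen.
    RANKER_REGISTRY[name] = ranker


def render_feed(user_id: str, candidates: List[Post], chosen: str) -> List[Post]:
    ranker = RANKER_REGISTRY.get(chosen)
    if ranker is None:
        return candidates   # fall back to the platform's default ordering
    return ranker(user_id, candidates)


# Example: a provider registers a chronological option and a user selects it.
register_ranker("chronological",
                lambda uid, posts: sorted(posts, key=lambda p: p["posted_at"], reverse=True))
feed = render_feed("user-42", [], chosen="chronological")
```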

     >> MARTHA TUDON:  Natali, if you had the power to address this issue tomorrow, who would be the first stakeholder you'd address and what would you ask?
     >> NATALI HELBERGER:  I'd ask all the people within media organizations, within platforms, but also standalone services, who are already experimenting with more diverse recommendation metrics and measures and trying to figure out ways to move more in the direction of public service-inspired algorithms.  I'd address those and ask what they need to make this work and to succeed in what they're doing.  And try to give it to them.

     >> ALI-ABBAS ALI: First of all, I'd agree with Michael and Natali, but I'm going in a slightly different direction.  Our stakeholders being citizens, the thing I'm acutely conscious of is that I'm a Generation X regulator, but the market in which the regulation is going to land is one which is going to be increasingly dominated by Generation Z.  We call them digital natives.

     And I don't think people of my generation, certainly, have a really good understanding of the way that younger people use and understand the digital environment. 

     The stakeholders I'd like to talk to, and whose needs I'd like to understand, are those who have grown up with the internet.  Early conversations with them demonstrate they are incredibly clever, incredibly savvy and understand their world in a way my generation can't.

     If I'm setting regulation that will last 10, 15, 20 years, I need to be conscious that that's the group I'm regulating for, and I need to understand what they need a little more.

     >> MARTHA TUDON:  Thank you, Ali, Isa, same question.

     >> ISA STASI:  It'll be very difficult to pick one and add something meaningful.  I'll put it this way.  I do believe that everything said is absolutely crucial.  I'm still personally convinced that one of the main roots of this problem is the concentration of power.

     So, I do believe that the first person to talk to is the regulator, to say: the incentives to open up these environments are not there; we need to create incentives, you need to create incentives.  How and when, in which direction and what exactly we need, we need to discuss with all the other stakeholders.

     Without the imposition of certain specific obligations to open up these environments, this won't happen.  It hasn't happened in 20 years, and I don't see why the situation would change in the next five to ten years.  That would be my choice, I suppose.

     >> MARTHA TUDON:  Thank you, Isa.  Now I'll give you the power to answer the very last question.  Sorry, I have to pick someone.  This is the last question: is this idea of middleware and expanding choice a recipe for fragmentation of the public sphere?  In one minute, if you can answer it...

     >> ISA STASI:  Should we go the other way around?  Or I'll just try to ‑‑
     >> MARTHA TUDON:  30 seconds and then maybe someone else.

     >> ISA STASI:  I believe the idea of creating middleware isn't necessarily an alternative to having a public sphere.  The way the different communities and recommendation systems need to interact with the existing big platforms is still to be decided; it needs to be defined.  When we talk about interoperability, we can talk about different degrees of it, allowing people to work within the same platform and allowing competition between the different providers.

     We can keep a public sphere, let's say, and populate this public sphere with a number of alternatives for people, and we can make it work, I suppose, from a technical perspective.

     I don't see those two as black and white situations.

     >> NATALI HELBERGER:  Right now, platforms are highly personalized.  There's already quite some fragmentation.  What we're discussing here is probably also a solution to that problem.

     Plus, I think we should remember that many people do not only receive their news from platforms; they also receive their news from media, from legacy media.  And that is another argument for why it's important to not only look into solutions within platforms, but also outside platforms, into how to strengthen a resilient media ecosystem.  I think we really need to think about these questions very hard, especially in light of very strong, powerful players on the market.  How can we protect a resilient and diverse media system?  I think that's the key response to the fragmentation concern.

     >> MARTHA TUDON:  Thank you, Natali and thank you, everyone, for being here.  It was a pleasure for Article 19 to host this panel.  So, we'll see you later.  Thank you.  Bye‑bye. 

     >> Thank you, bye.