24 May 2019

Liquid Representation

Concepts of representation in democracy can be thought of as having a viscosity; a lumpiness. Today's versions are extremely lumpy: you vote for one person to represent you on every topic, and you vote only once every four years or so. There are three dimensions here: who represents you, which topics they represent you on, and how often you get to choose them. If we put the concept of representation in a blender, to make the most liquid version, we get a sentence like: you can vote for whoever you want, to represent you on whatever you want, whenever you want.

Of course there are reasons for the infrequency of voting. One is cost: I recently learnt that a general election in France costs around 250 million euros (not counting the economic cost of people not working during that time). Another is to give those in power enough time to enact the policies we voted them in for; without that, it would be very hard to make (more) significant changes in government.

A few years ago I sketched a vision of democracy based on my interpretation of liquid representation: electing "trusted thinkers". The idea is that people express trust in others on specific topics, and they can change that trust whenever they want. Trust is transitive: if I trust you on a topic, and you trust someone else on it, that someone gets some of my trust too. When a decision needs to be made, we can efficiently (programmatically) select a small set of people based on the skills needed for that decision. If the network of trust is dense enough, most people will have someone they trust (at least indirectly) in the decision-making group. There are lots of questions we can ask about such a system, and the answers depend on things we don't know yet; after all, we don't have any human liquid-trust systems to study... So this is mostly hand waving for now, but...
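To make this slightly less hand-wavy, here is a minimal sketch of what the data could look like. It is an illustration under my own assumptions (the names, the structure, everything), not a design: trust is a set of directed, topic-scoped edges that can be revoked at any time, and a breadth-first walk checks whether someone is connected, directly or indirectly, to a decision-making group.

```python
# A minimal sketch of topic-scoped, revocable trust. Everything here
# (names, structure) is an assumption for illustration, not a spec.
from collections import defaultdict, deque

class TrustGraph:
    def __init__(self):
        # topic -> truster -> set of people they currently trust
        self._edges = defaultdict(lambda: defaultdict(set))

    def trust(self, truster, trustee, topic):
        self._edges[topic][truster].add(trustee)

    def revoke(self, truster, trustee, topic):
        # "whenever you want": trust can be withdrawn at any time
        self._edges[topic][truster].discard(trustee)

    def reaches(self, person, group, topic):
        """True if `person` trusts someone in `group` on `topic`,
        directly or through a chain of trust (transitivity)."""
        seen, queue = {person}, deque([person])
        while queue:
            current = queue.popleft()
            for trustee in self._edges[topic][current]:
                if trustee in group:
                    return True
                if trustee not in seen:
                    seen.add(trustee)
                    queue.append(trustee)
        return False

# Example: Ana trusts Bo on energy, Bo trusts Cy; if Cy ends up in the
# decision-making group, Ana is (indirectly) represented there.
g = TrustGraph()
g.trust("ana", "bo", "energy")
g.trust("bo", "cy", "energy")
print(g.reaches("ana", {"cy"}, "energy"))  # True
g.revoke("bo", "cy", "energy")
print(g.reaches("ana", {"cy"}, "energy"))  # False
```

How the group itself gets selected is the PageRank-shaped part; a sketch of that appears after the list below.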

In the years since that first blog post, I've noticed a few things:

* Liquid representation does in fact already exist: it's called PageRank, and it is a key ingredient of search engines - the idea that made Google succeed. Each page expresses trust in the pages it links to, on the topic of the link's anchor text. When you search, you are basically asking for the most trusted pages on that topic. (Plus lots of details of course, like non-exact text matching, the contents of pages, click signals, etc.) It just hasn't been applied to people, as far as I know. (A bare-bones sketch of the computation follows this list.)

* PageRank produces stable rankings, despite pages being able, in theory, to change their links at any time, and despite significant vested interests fighting for the top spots. Applied to people, changing links corresponds to changing who you trust on what. And we can engineer the system to create whatever kind of friction we want: e.g. we could require people to read things written by the people who represent them. This design space lets us imagine all kinds of "viscosity" for democratic, liquid-esque representation.

* The main attacks that game PageRank work by creating new web pages. That can't be done as easily with people, so we might expect liquid trust to be more robust to gaming than web search. Managing identity is not without challenges, but it is much easier than managing the existence of web pages.

* There's an interesting concept from citation metrics called the h-index: scientists get credit for their publications, and for other scientists citing those publications. A scientist's h-index is the largest number h such that h of their papers each have at least h citations. Writing 7 papers that are cited 3 times each, plus 1 cited 11 times, gets you a score of 3; writing 4 papers cited 23 times each, and 10 papers cited twice, gets you a score of 4 (a worked version follows below). We could apply the same idea to liquid trust to avoid it becoming a popularity contest: make sure the network isn't too star-shaped.
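Here is the bare-bones sketch promised above: a toy PageRank-style power iteration. It is my own illustration, not Google's actual algorithm; the example graph, damping factor and iteration count are made-up assumptions. The point is that the same computation reads two ways: nodes can be web pages with links as votes, or people with topic-scoped trust statements as votes.

```python
# Toy PageRank-style power iteration. Nodes can be pages or people;
# edges are links or trust statements on a single topic.

def rank_by_trust(edges, damping=0.85, rounds=50):
    """edges: node -> list of nodes it links to / trusts on this topic.
    Returns a score per node; more incoming (transitive) trust = higher."""
    nodes = set(edges) | {n for targets in edges.values() for n in targets}
    score = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(rounds):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for node, targets in edges.items():
            if targets:
                share = damping * score[node] / len(targets)
                for target in targets:
                    new[target] += share
            else:
                # nodes that trust no one spread their weight evenly
                for n in nodes:
                    new[n] += damping * score[node] / len(nodes)
        score = new
    return score

# One topic, four people: Ana trusts Bo and Cy, Bo trusts Cy, Cy trusts Dee.
climate = {"ana": ["bo", "cy"], "bo": ["cy"], "cy": ["dee"], "dee": []}
scores = rank_by_trust(climate)
group = sorted(scores, key=scores.get, reverse=True)[:2]
print(group)  # the two most (transitively) trusted people on this topic
```

The "viscosity" knobs from the second bullet would sit outside this computation: for example, a changed trust edge might only take effect after a delay, or after some reading requirement is met. That part is pure speculation on my part.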
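And the h-index arithmetic from the last bullet, worked out (the definition is the standard one; applying it to trust networks rather than citations is, again, my speculation):

```python
def h_index(citation_counts):
    """Largest h such that at least h items have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    while h < len(counts) and counts[h] >= h + 1:
        h += 1
    return h

# The two examples from the bullet above:
print(h_index([3] * 7 + [11]))       # 3
print(h_index([23] * 4 + [2] * 10))  # 4
```

Applied to trust, the analogue might be that you only count as broadly trusted if many distinct people (or chains of people) trust you, which penalises a single-celebrity star shape.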
