Sunday, March 24, 2019

There are two main ways of voting - strategic and ideological

The note in full.

I apologize in advance for the preachy tone and the wall of text, but that's because I've been pestered with questions and haven't slept enough, and not only because I'm a silly sheep, which is of course a fact.

So, about the festival of democracy. Slowly and sadly. There are two main ways of voting - strategic and ideological (there is also a third, but more on that later).

There is more below.

Strategic voting is when the voter has a specific goal - for example, to replace Bibi (or, on the contrary, to let him sit there until he drops). Guided by this goal, the voter starts lining up toy soldiers, takes out a calculator, does long division, writes algorithms, visits a fortune teller, asks the grandchildren - in short, looks for a way to make their vote bring them as close as possible to that goal. A more savvy "strategist" will count the mandates of the small parties and build schemes; a less savvy one will follow the polls. But that is not the point.

The plus of this approach: there is a chance that a notional Bibi will be replaced by a notional Vasya (or, conversely, that the voter will drop dead before Bibi does). There are more minuses (though I understand the overall weight may not depend on that).
First, remember Livni. I know you all want to forget her, but force yourselves and keep in mind the election she won but after which she could not form a coalition.

Second, the polls. Everything has already been said about them, but: polls lie. Not on purpose. Not by much. Not always. But you have to keep it in mind.
Third, no calculations (I'm not writing for mathematical geniuses here, am I?) guarantee the stability of a strategically assembled coalition. Remember the pendulum, it is a merciless beast.

Fourth, about Vasya. Vasya 2019 is tall, manly, and good-looking. He is for everything good and against everything bad. For me that is already a big plus, because Bibi is the complete opposite - I wouldn't have slept with him even in the most frivolous periods of my life, and even Cinderella's stepmother would envy his moral character. But the first problem is that you most likely won't get to sleep with Vasya either, so try not to be guided by this all-important argument. And the second problem: do you know anything about Vasya? In bed that might not matter, but see point one. So, we know nothing about Vasya, but we do know Vasya's friends, and we can at least be guided by the old proverb. Vasya's first friend we know best of all; we go way back with him. Unlike the rest of the company, he has a perfectly clear ideology that he does not hide. We have long known that he is 1) a right-wing Likudnik, 2) a supporter of a forceful but maximally restrained approach, 3) for the status quo (both with the Palestinians and inside the country). We have also studied Vasya's second friend fairly well and know that he is a populist by nature, yet not entirely devoid of certain principles and ideology where the status quo is concerned. So, having studied this trio, one can without much strain conclude that this alliance can be called "center-left" only out of sheer fright (which is exactly what is happening). So don't be afraid, and draw your own conclusions.

Ideological voting is when the voter has, pardon me, some kind of ideology. (Let me remind you that a fondness for soft drugs is not an ideology.) For example, the voter dreams of slaughtering all the Arabs in the course of a transfer, and votes for whoever will hand him a gun. Or the voter pushes for civil marriage and votes for someone with whom they can weave a flower crown together, put on a burqa, and go to the Pride parade.

The minus of such voting is that 1) the vote may be wasted, and that really sucks, 2) in the short term it may not affect the position of the notional Bibi.

In my view there are more pluses (but we remember about relative weight, right?).

First, you give the party that represents your ideology a chance to influence events. Yes, it is hard to influence them from the opposition, but it is not a given that the party will be in opposition, and even if it is, one should not forget the importance of an ideological opposition for the democratic process.

Second, you give that party at least a chance to survive (which, again, is very important for the democratic process).
Third, you strengthen the major candidate who stands closer to your party than the other one. Which may have a positive effect on the overall picture.

And fourth, which is most important and most often forgotten, you stay true to yourself and won't feel as if you found yourself, say, in a dumpster - and that is extremely important for the democratic process of the whole country.

However, there is a third way of voting, to which people usually resort not because life is good. It is the question of who represents you most strongly. Since I am not an editor-in-chief or a political commentator, and I have no professional or moral restrictions (leisure and morality are incompatible things), I can talk about myself. Ideologically, my party is quite unambiguously MERETZ. I'm a normal lefty with lefty convictions and lefty motivation. But not with a lefty biography. And here lies buried not just a dog but a dinosaur that died of obesity. Ideologically MERETZ represents me, but it in no way represents me as Alla, a "Russian", who has worked since age 14 cleaning houses and in print shops, a divorced mother with no help in the form of grandmas and grandpas, who has lived her whole life in rented wrecks, formerly of St. Petersburg and now of Givatayim, a fairly active slacker (using one of the few feminine word forms that doesn't irritate me). What's more, MERETZ, vaguely imagining my biography, does not even try to represent me, because it rightly figures that talking beats hauling sacks, in the sense that there are more accessible princesses. So I periodically deviate from ideological voting and resort to voting on the principle of "my own shirt is closer to my body", if I can find such a shirt. Remember Lapid's first election: it wasn't "rak lo Bibi" (anyone but Bibi), it was "I represent the middle class and to hell with everything else". That is exactly why I voted for him then, fully aware of all the minuses (and by the way, I don't regret it). This time I am choosing ideological voting (which, as you have probably understood by now, I consider the most correct tactic in general). But I will be voting for MERETZ, so to speak, "with a heavy heart". And I know there are a fair number of people like me, and there is nothing cheerful about that, because a party that represents your ideology while not representing you personally does so because you simply aren't interesting to it.

Summing up this useless wall of text: it seems to me that if more voters from any side of the political map (yes, I know I'm shooting myself in the foot, but even a leisure-section editor remains to some degree a journalist) voted ideologically, and the currently small but non-populist parties on both the right and the left came to carry more weight, the ruling party (whichever one) could do far more good - and not only for its own camp but also, Lord forgive me, I'm almost done, for the democratic process.
https://www.facebook.com/alla.gavrilov.1/posts/10219132834657631

Rafi Eitan has died

On March 23, Rafi Eitan, a former Minister for Pensioners' Affairs and one of the Mossad's most veteran officers, died at the age of 92...

...On Saturday, March 23, Rafi (Rafael) Eitan, one of the founders of Israeli intelligence, known for taking an active part in the operation to capture the Nazi criminal Adolf Eichmann, died at the age of 92...

There is more below.

...He was born on November 23, 1926, in kibbutz Ein Harod. His parents had come to Palestine from Russia in 1922. At the age of 12 Rafi joined the ranks of the Haganah. During the Second World War he took part in receiving illegal immigrants. He took part in sabotage actions against the British, in particular in the blowing up of the radar station on Mount Carmel.

In 1950 Isser Harel invited him to work in the Shin Bet (Shabak). In 1955 he followed Harel to the Mossad and headed its operations department.

...In May 1960 he headed the Mossad operational team that captured the Nazi criminal Adolf Eichmann in Argentina... Eitan took part in many secret operations carried out by Israel's security services, the most famous of which was the capture of the Nazi criminal, SS lieutenant colonel Adolf Eichmann, who had headed Gestapo department IV B 4, responsible for the "final solution of the Jewish question"... He later continued to work in the Mossad, heading its scientific and technical department.

In 1972 he retired and went into business; in 1978, at the invitation of Ariel Sharon, he returned to government service and was appointed adviser to the prime minister on counter-terrorism.

...In 1981 he was appointed head of the Bureau of Scientific Relations at the defense ministry [the Lakam intelligence service]. According to foreign media reports, the bureau engaged in industrial and scientific espionage and also obtained nuclear technologies. It was during Eitan's tenure as head of the bureau that Jonathan Pollard was recruited.

Recall that Pollard, sentenced to life imprisonment for spying for Israel, was released from an American prison in November 2015, having spent 30 years behind bars.

When Pollard's spying for Israel was exposed, a serious diplomatic conflict flared up between the United States and the Jewish state. In 1985 Eitan resigned from his post as head of the bureau at the defense ministry and took full responsibility for recruiting and running Pollard...

Eitan returned to business in the security field, where he prospered thanks to his personal ties with Cuba's leader Fidel Castro.

On November 21, 1985, Jonathan Pollard, a US naval intelligence analyst, was arrested in Washington; he turned out to be an Israeli spy working for Lakam.

After stepping down as head of Lakam, Eitan headed the Israel Chemicals concern (Chimikalim Le-Israel). After retiring from that post in 1990 he engaged in business.

...In 2006 he headed the pensioners' party... GIL ("Gimlaim Israelim la-Knesset")... and entered the 17th Knesset... The party received 7 seats in parliament... He was appointed Minister for Pensioners' Affairs in the government headed by Ehud Olmert.

Prime Minister Binyamin Netanyahu published a post on his Facebook page expressing condolences on the death of Rafi Eitan.

"My wife Sara and I grieve together with the entire people of Israel over the passing of the national hero Rafi Eitan, of blessed memory.

Rafi was a legend of the intelligence services of the State of Israel; he took part in countless operations and made an enormous contribution to Israel's security. He was a participant in the operation to capture the Nazi criminal Adolf Eichmann and bring him to just trial in Israel, in Jerusalem.

Rafi Eitan was a public figure and a minister in the Israeli government, and he worked on the return of Jewish property seized during the Holocaust.

He was a close friend of our family. His wisdom, sense of humor, and boundless devotion to the people of Israel and to our country were unequaled. We mourn his passing."
https://cursorinfo.co.il/all-news/skonchalsya-rafi-ejtan/
https://www.vesty.co.il/articles/0,7340,L-5483074,00.html

Cooperative Multiple Inheritance in Python: Practice (English)

Reading the theoretical part is not strictly required; its main result is restated here. But if you do read the theoretical part, you can skip the examples provided here.

The last example illustrates how to implement a mixin in Python.


Theoretically, not every inheritance graph can be linearized by the $C^3$ linearization algorithm. In practice this rarely happens, and when it does, Python raises an error. Another name for this algorithm is MRO - Method Resolution Order.

That's right: the diamond problem can be solved (in practice). We can use multiple inheritance where both parent classes carry state, and this will not cause any problem. No need for virtual inheritance (as in C++) or multiple interface inheritance (as in Java).
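For instance, here is a minimal sketch (the class names A, B, C, D are made up for illustration) of a diamond in which every class keeps its own state; thanks to cooperative super() calls, each __init__ runs exactly once:

class A:
    def __init__(self):
        self.a = 'a'
        super().__init__()

class B(A):
    def __init__(self):
        self.b = 'b'
        super().__init__()

class C(A):
    def __init__(self):
        self.c = 'c'
        super().__init__()

class D(B, C):
    def __init__(self):
        self.d = 'd'
        super().__init__()

d = D()
print(d.a, d.b, d.c, d.d)                   # a b c d -- every __init__ ran exactly once
print([cls.__name__ for cls in D.__mro__])  # ['D', 'B', 'C', 'A', 'object']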


There is more below.


Note: below we examine only so-called new-style classes. All classes in Python 3 are new-style. Everything listed here also holds in Python 2, provided no classic-style class is involved in the inheritance graph.

Note: in Python, the order in which classes appear in the inheritance tuple matters. That is, if we have $class X(A, B)$ and X contains a call to super(), Python checks class A first and then, possibly after checking some other classes (if the appropriate method is not found), it eventually checks class B.
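A tiny sketch of this note (the class names are made up): with class X(A, B), an attribute that exists on both A and B is resolved on A first.

class A:
    def who(self):
        return 'A'

class B:
    def who(self):
        return 'B'

class X(A, B):
    pass

class Y(B, A):
    pass

print(X().who())   # A -- A comes first in X's inheritance tuple
print(Y().who())   # B -- swapping the tuple swaps the winner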

The MRO has the following basic properties:

a) Children precede their parents.
b) If a class inherits from multiple classes, they are kept in the order specified in the tuple of the base class.

It also has the following properties:

* Suppose you have a new-style class definition $class X(Y_1, Y_2,...,Y_N): pass$; then the MRO of X starts with X [see proposition (0.0)].

* `object` is last in any MRO that consists of new-style classes only [see proposition (2.0)].

* If we have the MRO of some node $A_1$ and we choose some node appearing in that MRO, then that node's MRO is a subchain of the MRO of $A_1$ [see proposition (2.1)].

* Let G be an inheritance graph with at least one node, and choose some node $A_1$ with MRO $L[A_1]$. If we define a new class $B(A_1, A_2,...,A_n)$, that is, a class that inherits from n classes, then the MRO of B will be $L[B]=B+A_1+...$ [see proposition (2.2)].

This means that for a class of the general form $B(A_1, A_2,...,A_n)$ the MRO always starts with B, followed by the first class in the inheritance tuple. In particular, if we inherit from a single class (the case $n=1$), the MRO starts with B and then the (single) parent class $A_1$. Moreover, when $n=2$ the MRO starts with B and then the first class in the inheritance tuple, $A_1$. There is no guarantee about which class comes next: it is not necessarily $A_2$, and not necessarily $A_1$'s own parent; it can, for example, be a subclass of that parent, see Example 2 below.

* You can always inspect $X.__mro__$ instead of computing it by hand, as in the short check below.
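For example, asking a built-in class for its MRO:

print(bool.__mro__)   # (<class 'bool'>, <class 'int'>, <class 'object'>)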


Example 1


Based on https://www.python.org/download/releases/2.3/mro/

Suppose, we have following class hierarchy:

$class D(O): pass$

$class E(O): pass$

$class B(D, E): pass$

where $O$ denotes `object`.

We can visualize the inheritance graph as follows:




By proposition (0.0), $L[B(D, E)] = B, ...$. By proposition (2.0), the last class is `object`, so $L[B(D, E)] = ..., O$. So we know that

$L[B(D, E)] = B, ..., O$

We assume that the MRO exists, so there are two possibilities:

1) $L[B(D, E)] = B, D, E, O$
2) $L[B(D, E)] = B, E, D, O$

(D, E) is the inheritance tuple of B. By b), the order in which they appear must be preserved, so D must go before E; option 2) is therefore impossible, and the answer is

$L[B(D, E)] = B, D, E, O$
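This can be checked directly in the interpreter (same class names; O stands for `object`):

class D: pass
class E: pass
class B(D, E): pass

print([cls.__name__ for cls in B.__mro__])   # ['B', 'D', 'E', 'object']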



Example 2.

Let's look at another example. The Python standard library has the built-in dict class (written Dict below). There is also OrderedDict (written OrderDict below), which extends Dict. Python also has a Counter class; Counter inherits from Dict. It counts how many times each specific element (for example, each character of a string) appears, but the order in which it prints the result is unspecified.

Let's define custom class

$class OrderedCounter(Counter, OrderDict): pass$

It appears that we can effectively change the "inheritance chain" of Counter: it looks as if we dynamically changed the immediate parent of Counter from Dict to OrderDict. More precisely, we change the MRO through which Counter's methods are resolved.

As you can see in the picture, we have a diamond inheritance problem here.



We will not calculate the full MRO, but we will put enough constraints on it to establish that the first class after OrderedCounter in its MRO is Counter, and that OrderDict goes before Dict.


So, we have to prove:

I. The MRO of OrderedCounter starts with OrderedCounter, Counter.
II. object has to be last in the MRO.
III. OrderDict goes before Dict in the MRO of OrderedCounter.

I. By proposition (0.0), the MRO of OrderedCounter starts with OrderedCounter. OrderedCounter has two direct parents, Counter and OrderDict. By a), "children precede their parents", the element right after OrderedCounter has to be Counter or OrderDict. From b), "the order specified in the inheritance tuple is preserved", it follows that Counter has to go before OrderDict, because that is how they are listed in the inheritance tuple of OrderedCounter. So Counter is the second element of the MRO.

II. This follows directly from proposition (2.0).

III.
So far we have established that

(#) L[OrderedCounter] = OrderedCounter, Counter, ..., object.

We have the following options:

1) L[OrderedCounter] = OrderedCounter, Counter, Dict, OrderDict, object
2) L[OrderedCounter] = OrderedCounter, Counter, OrderDict, Dict, object

There is no other possibility that does not contradict (#) and consists only of classes of the given inheritance graph. Let's write option 1) a bit more explicitly:

1) L[OrderedCounter] = OrderedCounter(Counter, OrderDict), Counter(Dict), Dict(object), OrderDict(Dict), object

On the one hand, Dict is listed before OrderDict; on the other hand, we have OrderDict(Dict), and by a) "children precede their parents" OrderDict should be listed before Dict. Contradiction.

Assuming the given graph is linearizable (Python does not raise an error), it has to be option 2), that is

L[OrderedCounter] = OrderedCounter, Counter, OrderDict, Dict, object

and OrderDict is reached before Dict in the MRO.
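A quick sanity check with the real classes from the standard library (collections.Counter and collections.OrderedDict; the article abbreviates them as Counter and OrderDict):

from collections import Counter, OrderedDict

class OrderedCounter(Counter, OrderedDict):
    pass

print([cls.__name__ for cls in OrderedCounter.__mro__])
# ['OrderedCounter', 'Counter', 'OrderedDict', 'dict', 'object']

oc = OrderedCounter('abracadabra')
print(list(oc))   # ['a', 'b', 'r', 'c', 'd'] -- keys come out in insertion order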

***

The exact MRO algorithm is given in the theoretical part. In practice, chances are you will never use it directly. First of all, you can ask Python to do it for you: $X.__mro__$ gives you the result of applying the Method Resolution Order algorithm. If the MRO for your inheritance graph cannot be computed, Python raises an error.

More importantly, in order to reason about the MRO it is sufficient to know the properties listed above. We just did exactly that in the two examples above.

***

For more elaborate example see https://en.wikipedia.org/wiki/C3_linearization or here https://www.python.org/download/releases/2.3/mro/

Raymond Hettinger claims that using (cooperative) multiple inheritance in Python makes Dependency Injection unnecessary. You can watch his talk "Super considered super!" at https://www.youtube.com/watch?v=EiOglTERPEo and read his blog post at https://rhettinger.wordpress.com/2011/05/26/super-considered-super/ (Python 3 syntax: http://code.activestate.com/recipes/577720-how-to-use-super-effectively/). Note, however, that he uses an incorrect definition of inheritance: code reuse is a feature of mixins, not the goal of inheritance.

Multiple inheritance in Python is cooperative.

For reorderable method calls to work, the classes need to be designed cooperatively. This presents three easily solved practical issues:

1) the method being called by super() needs to exist
2) the caller and callee need to have a matching argument signature
3) and every occurrence of the method needs to use super() [emphasis mine]

1) Let’s first look at strategies for getting the caller’s arguments to match the signature of the called method. This is a little more challenging than traditional method calls where the callee is known in advance. With super(), the callee is not known at the time a class is written (because a subclass written later may introduce new classes into the MRO; for example, see Example 2 above).

One approach is to stick with a fixed signature using positional arguments. [For example]... to use a fixed signature of two arguments, a key and a value...

A more flexible approach is to have every method in the ancestor tree cooperatively designed to accept keyword arguments and a keyword-arguments dictionary, to remove any arguments that it needs, and to forward the remaining arguments using **kwds, eventually leaving the dictionary empty for the final call in the chain.

Each level strips-off the keyword arguments that it needs so that the final empty dict can be sent to a method that expects no arguments at all (for example, object.__init__() expects zero arguments):
...
2) Having looked at strategies for getting the caller/callee argument patterns to match, let’s now look at how to make sure the target method exists.

...We know that object has an __init__() method and that object is always the last class in the MRO chain, so any sequence of calls to super().__init__() is guaranteed to end with a call to object.__init__() method. In other words, we’re guaranteed that the target of the super() call is guaranteed to exist and won’t fail with an AttributeError.

For cases where object doesn’t have the method of interest (a draw() method for example), we need to write a root class that is guaranteed to be called before object. The responsibility of the root class is simply to eat the method call without making a forwarding call using super().

Root.draw() can also employ defensive programming using an assertion to ensure it isn’t masking some other draw() method later in the chain. This could happen if a subclass erroneously incorporates a class that has a draw() method but doesn’t inherit from Root.
...
If subclasses want to inject other classes into the MRO, those other classes also need to inherit from Root so that no path for calling draw() can reach object without having been stopped by Root.draw(). This should be clearly documented so that someone writing new cooperating classes will know to subclass from Root...

3) The techniques shown above assure that super() calls a method that is known to exist and that the signature will be correct; however, we’re still relying on super() being called at each step so that the chain of delegation continues unbroken. This is easy to achieve if we’re designing the classes cooperatively – just add a super() call to every method in the chain.

The three techniques listed above provide the means to design cooperative classes that can be composed or reordered by subclasses.
...
Occasionally, a subclass may want to use cooperative multiple inheritance techniques with a third-party class that wasn’t designed for it (perhaps its method of interest doesn’t use super() or perhaps the class doesn’t inherit from the root class). This situation is easily remedied by creating an adapter class that plays by the rules.

For example, the following Moveable class does not make super() calls, and it has an __init__() signature that is incompatible with object.__init__(), and it does not inherit from Root:
...
If we want to use this class with our cooperatively designed ColoredShape hierarchy, we need to make an adapter with the requisite super() calls:


Python 3 syntax:
# ------- Getting the argument signatures to match --------------

class Shape:
    def __init__(self, *, shapename, **kwds):
        self.shapename = shapename
        super().__init__(**kwds)

class ColoredShape(Shape):
    def __init__(self, *, color, **kwds):
        self.color = color
        super().__init__(**kwds)

cs = ColoredShape(color='red', shapename='circle')

# -------- Making sure a root exists ----------------------------

class Root:
    def draw(self):
        # the delegation chain stops here
        assert not hasattr(super(), 'draw')

class Shape(Root):
    def __init__(self, *, shapename, **kwds):
        self.shapename = shapename
        super().__init__(**kwds)
    def draw(self):
        print('Drawing.  Setting shape to:', self.shapename)
        super().draw()

class ColoredShape(Shape):
    def __init__(self, *, color, **kwds):
        self.color = color
        super().__init__(**kwds)
    def draw(self):
        print('Drawing.  Setting color to:', self.color)
        super().draw()

ColoredShape(color='blue', shapename='square').draw()
print('-' * 20)

# ------- Show how to incorporate a non-cooperative class --------

class Moveable:
    # non-cooperative class that doesn't use super()
    def __init__(self, x, y):
        self.x = x
        self.y = y
    def draw(self):
        print('Drawing at position:', self.x, self.y)

class MoveableAdapter(Root):
    # make a cooperative adapter class for Moveable
    def __init__(self, *, x, y, **kwds):
        self.moveable = Moveable(x, y)
        super().__init__(**kwds)
    def draw(self):
        self.moveable.draw()
        super().draw()

class MovableColoredShape(ColoredShape, MoveableAdapter):
    pass

MovableColoredShape(color='red', shapename='triangle', x=10, y=20).draw()

http://code.activestate.com/recipes/577720-how-to-use-super-effectively/

Python 2 syntax:

# ------- Getting the argument signatures to match --------------

class Shape(object):
    def __init__(self, shapename, **kwds):
        self.shapename = shapename
        super(Shape, self).__init__(**kwds)

class ColoredShape(Shape):
    def __init__(self, color, **kwds):
        self.color = color
        super(ColoredShape, self).__init__(**kwds)

cs = ColoredShape(color='red', shapename='circle')

# -------- Making sure a root exists ----------------------------

class Root(object):
    def draw(self):
        # the delegation chain stops here
        assert not hasattr(super(Root, self), 'draw')

class Shape(Root):
    def __init__(self, shapename, **kwds):
        self.shapename = shapename
        super(Shape, self).__init__(**kwds)
    def draw(self):
        print 'Drawing.  Setting shape to:', self.shapename
        super(Shape, self).draw()

class ColoredShape(Shape):
    def __init__(self, color, **kwds):
        self.color = color
        super(ColoredShape, self).__init__(**kwds)
    def draw(self):
        print 'Drawing.  Setting color to:', self.color
        super(ColoredShape, self).draw()

ColoredShape(color='blue', shapename='square').draw()
print '-' * 20

# ------- Show how to incorporate a non-cooperative class --------

class Moveable(object):
    # non-cooperative class that doesn't use super()
    def __init__(self, x, y):
        self.x = x
        self.y = y
    def draw(self):
        print 'Drawing at position:', self.x, self.y

class MoveableAdapter(Root):
    # make a cooperative adapter class for Moveable
    def __init__(self, x, y, **kwds):
        self.moveable = Moveable(x, y)
        super(MoveableAdapter, self).__init__(**kwds)
    def draw(self):
        self.moveable.draw()
        super(MoveableAdapter, self).draw()

class MovableColoredShape(ColoredShape, MoveableAdapter):
    pass

MovableColoredShape(color='red', shapename='triangle', x=10, y=20).draw()
https://rhettinger.wordpress.com/2011/05/26/super-considered-super/

Example 3
This example shows the use of (cooperative) multiple inheritance to provide a mixin. See the separate article on what a mixin is: https://www.toalexsmail.com/2019/03/mixin-diamond-problem-inheritance.html.

Based on https://en.wikipedia.org/wiki/Mixin#In_Python

In Python, mixins are implemented using inheritance.

In Python, the SocketServer module has both a UDPServer class and a TCPServer class. They act as servers for UDP and TCP socket servers, respectively. Additionally, there are two mixin classes: ForkingMixIn and ThreadingMixIn. Normally, all new connections are handled within the same process. By extending TCPServer with the ThreadingMixIn as follows:

class ThreadingTCPServer(ThreadingMixIn, TCPServer):
  pass

the ThreadingMixIn class adds functionality to the TCP server such that each new connection creates a new thread. Alternatively, using the ForkingMixIn would cause the process to be forked for each new connection. Clearly, the functionality to create a new thread or fork a process is not terribly useful as a stand-alone class.

In this usage example, the mixins provide alternative underlying functionality without affecting the functionality as a socket server.
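A minimal sketch of this composition in Python 3, where the module is named socketserver (the EchoHandler class below is made up for illustration):

import socketserver

class EchoHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # each connection is handled in its own thread, courtesy of ThreadingMixIn
        self.wfile.write(self.rfile.readline())

class ThreadingTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    pass

# server = ThreadingTCPServer(('localhost', 9999), EchoHandler)
# server.serve_forever()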



Mixin, the Diamond problem, inheritance (English)


The inheritance mechanism is a core component of the Object Oriented Programming paradigm. Inheritance means that a class "inherits" another class's definition and type. Inheritance expresses an IS-A relationship: a dog IS-A pet, a car IS-A vehicle.


I want to emphasize it again: code reuse is not a goal of inheritance. Code reuse is a goal of mixins (see below). In Python, mixins are implemented by means of (cooperative multiple) inheritance (see below).

There is more below.



The "diamond problem" is an ambiguity that arises when two classes B and C inherit from A, and class D inherits from both B and C. If there is a method in A that B and C have overridden, and D does not override it, then which version of the method does D inherit: that of B, or that of C?



https://en.wikipedia.org/wiki/Multiple_inheritance#The_diamond_problem

It is called the "diamond problem" because of the shape of the class inheritance diagram in this situation. In this case, class A is at the top, both B and C separately beneath it, and D joins the two together at the bottom to form a diamond shape.
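A small illustration of the ambiguity in Python terms (hypothetical class names). Python resolves it via the MRO, so with class D(B, C) the override from B wins:

class A:
    def greet(self):
        return 'A'

class B(A):
    def greet(self):
        return 'B'

class C(A):
    def greet(self):
        return 'C'

class D(B, C):
    pass

print(D().greet())   # B -- D's MRO is (D, B, C, A, object)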

Mixin

Wikipedia defines a mixin as "a class that provides a certain functionality to be inherited by a subclass, but is not meant to stand alone. Inheriting from a mixin is not a form of specialization but is rather a means to collect functionality. A subclass may even choose to inherit most or all of its functionality by inheriting from one or more mixins through multiple inheritance. A mixin can also be viewed as an interface with implemented methods."

Inheritance defines an "is a" relationship between classes. A dog "is a" pet, a car "is a" vehicle, i.e., a car is a specialization of a vehicle. A mixin, on the other hand, is a means of sharing functionality among a series of related classes that do not inherit from the same base class. A mixin allows you to reuse code without implying a relationship among the classes. It also allows you to get around the single inheritance model a bit to do this.

I'm starting with the JavaScript example, because JavaScript implements a mixin by actually copying methods from the mixin to the object, which is a good mental model for what a mixin is.

All the other examples (with the exception of C#) use inheritance to implement a mixin, which is theoretically wrong, because a mixin does not have IS-A semantics.


Mixin in JavaScript
Based on https://en.wikipedia.org/wiki/Mixin#In_JavaScript

It is technically possible to add behavior to an object by binding functions to keys in the object. However, this approach lacks separation between state and behavior.

First way: using an extend(obj, mixin) function.

'use strict';

const Halfling = function (fName, lName) {
  this.firstName = fName;
  this.lastName = lName;
};

const mixin = {
  fullName() {
    return this.firstName + ' ' + this.lastName;
  },
  rename(first, last) {
    this.firstName = first;
    this.lastName = last;
    return this;
  }
};

// An extend function
const extend = (obj, mixin) => {
  Object.keys(mixin).forEach(key => obj[key] = mixin[key]);
  return obj;
};

const sam = new Halfling('Sam', 'Loawry');
const frodo = new Halfling('Freeda', 'Baggs');

// Mixin the other methods
extend(Halfling.prototype, mixin);

console.log(sam.fullName());  // Sam Loawry
console.log(frodo.fullName());  // Freeda Baggs

sam.rename('Samwise', 'Gamgee');
frodo.rename('Frodo', 'Baggins');

console.log(sam.fullName());  // Samwise Gamgee
console.log(frodo.fullName());  // Frodo Baggins

Second way - Object.assign(obj, mixin)

'use strict';

// Creating an object
const obj1 = {
  name: 'Marcus Aurelius',
  city: 'Rome',
  born: '121-04-26'
};

// Mixin 1
const mix1 = {
  toString() {
    return `${this.name} was born in ${this.city} in ${this.born}`;
  },
  age() {
    const year = new Date().getFullYear();
    const born = new Date(this.born).getFullYear();
    return year - born;
  }
};
// Mixin 2
const mix2 = {
  toString() {
    return `${this.name} - ${this.city} - ${this.born}`;
  }
};

//  Adding the methods from mixins to the object using Object.assign()
Object.assign(obj1, mix1, mix2);

console.log(obj1.toString());   // Marcus Aurelius - Rome - 121-04-26
console.log(`His age is ${obj1.age()} as of today`);  // His age is 1897 as of today


Java before version 8

Pre-Java 8, Java classes do not have multiple inheritance. The standard way to simulate multiple inheritance in Java is delegation / composition / decorator. (In the sketches below, B and C are assumed to be simple classes exposing methodB() and methodC().)

Delegation:

class A {
  private B b = new B();
  private C c = new C();

  public void method1() {
    b.methodB();
  }


  public void method2() {
    c.methodC();
  }
}

public class Driver {
  public static void main(String[] args){
    A a=new A();
    a.method1();
    a.method2();
  }
}

Decorator:
class A {
  private B b;
  private C c;

  public A(B b, C c){
    this.b=b;
    this.c=c;
  }

  public void method1() {
    b.methodB();
  }


  public void method2() {
    c.methodC();
  }
}

public class Driver {
  public static void main(String[] args){
    B b=new B();
    C c=new C();
    A a=new A(b, c);

    a.method1();
    a.method2();
  }
}

The classic example of this technique is BufferedOutputStream. The ability to buffer a stream is modeled in Java as an object (because "everything" in Java is an object). Actually, it should be a mixin: buffering is an orthogonal concept. For example, the same ability is also used in BufferedInputStream.

Note: pre-Java 8 does have multiple inheritance of interfaces. A class can implement multiple interfaces (and an interface can extend several interfaces). The ambiguity that arises from a method name collision is very limited: the collision is between two (or more) purely abstract methods. Since an interface has no state, it does not really matter which method "wins"; for the class to be usable you have to provide an implementation of such a method in the class anyway, and that implementation is used for all the colliding abstract methods. In particular, if we have a diamond problem, the problem is only with purely abstract methods, so the issue is very limited, as described above.

Java version 8 and above.
Java 8 interfaces have limited support for multiple inheritance. You can have actual behavior there (through default (non-abstract) methods; you can also have static methods, but those have nothing to do with inheritance), but you cannot have state there (no data members are allowed). So, while technically you can have a diamond problem, the method resolution rules were specifically crafted so that there is no ambiguity about which method to call. State appears only in the class that implements these interfaces. If the class provides its own implementation, its implementation wins; if it does not, the rules below determine which interface's default implementation is used (Java does not take into account the order of the interfaces in the implements clause), and since interfaces have no state, there is no hidden state conflict either way.

Note: Java 8 interfaces are still stateless.

With the introduction of default methods in Java 8, it is now possible for a class to inherit the same method from multiple places (such as another class or interface). The following rules can be used to determine which method is selected in such cases:

1. A class or superclass method declaration always takes priority over a default method
2. Otherwise, the method with the most specific default-providing interface is used
3. Finally, if the methods are equally specific, there will be a compiler error and you will be forced to explicitly override the method and specify which one your class should call
...

public interface A {
  default void name() {
    System.out.println("A");
  }
}

public interface B extends A {}

public interface C extends A {}

public class D implements B, C {
  public static void main(final String... args) {
    new D().name();
  }
}

Answer: A

The sub-interfaces B and C haven't overridden the method, so there is actually only the method from A to choose from. As a side note, if either B or C (but not both) had overridden the method, then Rule 2 would have applied. By the way, this is the diamond problem.

https://fahdshariff.blogspot.com/2016/06/java-8-default-method-resolution-rules.html

If rules 1 and 2 fail to determine which method to use, then by rule 3 we should add a method to the class that implements these interfaces and have it explicitly call one of the interfaces' implementations. See the link above for a detailed explanation and examples of these rules.



Scala
Based on https://en.wikipedia.org/wiki/Mixin#In_JavaScript

...As their name reveals, Traits are usually used to represent a distinct feature or aspect that is normally orthogonal to the responsibility of a concrete type or at least of a certain instance. For example, the ability to sing is modeled as such an orthogonal feature: it could be applied to Birds, Persons, etc.

Note: a Scala trait can have state. Scala supports multiple inheritance of traits. There are strict rules on how multiple inheritance is resolved. Java 8 interfaces are "trait--" (a weaker form of traits).

trait Singer{
  def sing { println(" singing ...") }
  //more methods
}

class Bird extends Singer


Here, Bird has mixed in all methods of the trait into its own definition as if class Bird had defined method sing() on its own.

Since extends is also used to inherit from a superclass, in the case of traits extends is used when no superclass is inherited, and only for mixing in the first trait. All following traits are mixed in using the keyword with.

trait Performer{
  def perform{ println(" performing... ") }
  //more methods
}


class Person
class Actor extends Person with Singer      // with a superclass: traits are mixed in via `with`
class Actor extends Singer with Performer   // without a superclass: the first trait uses `extends`
// (the two Actor definitions are alternatives; they are not meant to compile together)

C# 3.0 can also simulate mixins by using extension methods; see Implementing Mixins with C# Extension Methods.

Kotlin can simulate mixins in both ways: via interfaces with default method implementations (as in Java 8) and via extension functions.

Python
Based on https://en.wikipedia.org/wiki/Mixin#In_Python

In Python, mixins are implemented using inheritance.

In Python, the SocketServer module has both a UDPServer class and a TCPServer class. They act as servers for UDP and TCP socket servers, respectively. Additionally, there are two mixin classes: ForkingMixIn and ThreadingMixIn. Normally, all new connections are handled within the same process. By extending TCPServer with the ThreadingMixIn as follows:

class ThreadingTCPServer(ThreadingMixIn, TCPServer):
  pass

the ThreadingMixIn class adds functionality to the TCP server such that each new connection creates a new thread. Alternatively, using the ForkingMixIn would cause the process to be forked for each new connection. Clearly, the functionality to create a new thread or fork a process is not terribly useful as a stand-alone class.

In this usage example, the mixins provide alternative underlying functionality without affecting the functionality as a socket server.




Cooperative Multiple Inheritance in Python: Theory (English)

The $C^3$ linearization algorithm was first published in 1996. It was adopted by the Open Dylan implementation in January 2012. It has been chosen as the default algorithm for method resolution in Python 2.3 (and newer).

Another name for this algorithm is MRO - Method Resolution Order.

Theoretically, not every inheritance graph can be linearized by the $C^3$ linearization algorithm (see the example in Appendix A). Linearization is a more restricted version of a topological sort of the graph. In practice this rarely happens, and when it does, Python raises an error.

That's right: the diamond problem can be solved (in practice). We can use multiple inheritance where both parent classes carry state, and this will not cause any problem. No need for virtual inheritance (as in C++) or multiple interface inheritance (as in Java).

Note: below we examine only so-called new-style classes. All classes in Python 3 are new-style. Everything listed here also holds in Python 2, provided no classic-style class is involved in the inheritance graph.

Algorithm
Suppose you have a class definition $class X(Y_1, Y_2,...,Y_N): pass$; then

(*) $L[X(Y_1, Y_2,...,Y_N)] = X + L[(Y_1,...,Y_N)] = X + merge(L[Y_1], L[Y_2],...,L[Y_N], Y_1, Y_2,...,Y_N)$
$L[\emptyset]=\emptyset$, where $\emptyset$ denotes the empty list and + means list concatenation. Note: () also denotes the empty list.

In English: let $X$ be a class that extends classes $Y_1, Y_2,...,Y_N$. Then the linearization of class $X$ is (recursively) defined as the list whose head is X and whose tail is the merge of the results of the (recursive) calls of the algorithm on $Y_1, Y_2,...,Y_N$ (these are the first $N$ terms inside the merge), followed by $Y_1, Y_2,...,Y_N$ themselves.

The merge is done by selecting the first head of the lists which does not appear in the tail (all elements of a list except the first) of any of the lists. Note, that a good head may appear as the first element in multiple lists at the same time, but it is forbidden to appear anywhere else. The selected element is removed from all the lists where it appears as a head and appended to the output list. The process of selecting and removing a good head to extend the output list is repeated until all remaining lists are exhausted. If at some point no good head can be selected, because the heads of all remaining lists appear in any one tail of the lists, then the merge is impossible to compute due to inconsistent orderings of dependencies in the inheritance hierarchy and no linearization of the original class exists (abnormally).
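The following is a minimal sketch of the merge procedure described above, written in Python with strings standing in for classes. The helper names c3_merge and c3_linearize are made up for illustration and are not part of Python itself; the last argument passed to the merge is the list of direct bases, written Y_1,...,Y_N in formula (*).

def c3_merge(seqs):
    # repeatedly pick a "good head": a head that appears in no tail of any remaining list
    result = []
    seqs = [list(s) for s in seqs if s]
    while seqs:
        for seq in seqs:
            head = seq[0]
            if not any(head in other[1:] for other in seqs):
                break
        else:
            raise TypeError("inconsistent hierarchy: no linearization exists")
        result.append(head)
        # remove the chosen head everywhere and drop lists that became empty
        seqs = [[c for c in s if c != head] for s in seqs]
        seqs = [s for s in seqs if s]
    return result

def c3_linearize(cls, bases, lin):
    # (*)  L[X(Y_1,...,Y_N)] = X + merge(L[Y_1],...,L[Y_N], Y_1,...,Y_N)
    return [cls] + c3_merge([lin[b] for b in bases] + [list(bases)])

# Example 1 below, with O standing for object:
lin = {'O': ['O']}
lin['D'] = c3_linearize('D', ['O'], lin)
lin['E'] = c3_linearize('E', ['O'], lin)
print(c3_linearize('B', ['D', 'E'], lin))   # ['B', 'D', 'E', 'O']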


There is more below.



Example 1

Based on https://www.python.org/download/releases/2.3/mro/

Suppose, we have following class hierarchy:

$class D(O): pass$

$class E(O): pass$

$class B(D, E): pass$

where $O$ denotes $object$.

We can visualize the inheritance graph as follows:



$L[B(D, E)]=B+merge(L[D], L[E], D, E)=B+merge(D+L[O], E+L[O], D, E)=$

$=B+merge(D+O+L[\emptyset], E+O+L[\emptyset], D, E)=$

$=B+merge(D+O+\emptyset, E+O+\emptyset, D, E)=$

$=B+merge(D+O, E+O, D, E)=B+D+merge(O, E+O, \emptyset, E)=$

We can't pull out O; E+O disagrees (O appears in its tail). We pull out E instead:

$=B+D+E+merge(O, O, \emptyset, \emptyset)=B+D+E+O$

Note that we did some computations twice. We can do this more efficiently.

A shorter way.
Let's do the calculation level by level: we compute $L[O], L[D], L[E], L[B]$.

$L[O]=O+L[()]=O+L[\emptyset]=O+\emptyset=O$
$L[D(O)]=D+merge(L[O], O)=D+merge(O, O)=D+O$


that is, $L[D(O)]=D+O$.

$L[E(O)]=E+merge(L[O], O)=E+merge(O, O)=E+O$

that is, $L[E(O)]=E+O$.

$L[B(D, E)]=B+merge(L[D], L[E], D, E)=$ {applying the calculations above} $=B+merge(D+O, E+O, D, E)=$

We see that D is a good head, therefore we take it.

$=B+D+merge(O,E+O,\emptyset,E)=$

Now O is not a good head, since it is in the tail of the sequence E+O. In this case, we have to skip to the next list. Then we see that E is a good head; we take it and we are reduced to compute $merge(O,O, \emptyset, \emptyset)$ which gives O.

$=B+D+E+merge(O,O, \emptyset, \emptyset)=B+D+E+O$.

Q.E.D

Now that we have some feeling for how the algorithm works, let's prove some propositions.

First of all, let's rewrite (*) explicitly for the cases of 0, 1, 2 immediate parents, $n=0,1,2$.

(1.0) $L[X]=X+L[()]=X+L[\emptyset]=X+\emptyset=X$
(1.1) $L[X(Y)]=X+merge(L[Y], Y)$
(1.2) $L[X(Y_1, Y_2)]=X+merge(L[Y_1], L[Y_2], Y_1, Y_2)$

Proposition 0.0
Suppose you have a class definition $class X(Y_1, Y_2,...,Y_N): pass$; then the MRO of X starts with X.


Proof.
We should prove that $L[X]=X+...$

This follows directly from (*):

$L[X(Y_1, Y_2,...,Y_N)] = X + L[(Y_1,...,Y_N)] = X + ...$

Q.E.D

$C^3$ linearization algorithm has following constraints:

0. Consistency with the parents' MROs: the inheritance graph determines the structure of the method resolution order.
1. Consistency with the local precedence order, i.e., a parent class appears in the MRO only after the classes listed before it in the inheritance tuple.
2. Monotonicity: an MRO is monotonic if, whenever C1 precedes C2 in the linearization of C, C1 also precedes C2 in the linearization of any subclass of C.

or rephrasing it more concisely:

a) Children precede their parents.
b) If a class inherits from multiple classes, they are kept in the order specified in the tuple of the base class.

See Appendix B for a proof that these properties follow from the $C^3$ linearization algorithm.


Proposition 1.0 (uniqueness)

If the $C^3$ linearization algorithm produces a linearization $X_1,...,X_n$, then this order is unique.

Proof.

Assume that we have two different linearization lists, $X_1,...,X_n$ and $Y_1,...,Y_n$. Denote by $X_i$ the last element up to which they agree (that is, $X_j=Y_j$ for all $j \le i$).

We have an inheritance graph, which means we have a (finite) DAG. Applying formula (*) recursively, we travel on the inheritance graph from children to parents. By proposition (0.0), $X_1=Y_1$.

Let's look at the moment when $X_i$ was pulled out of the merge (every $X_j$, $Y_j$ with $j>1$ is at some point pulled out of a merge). $X_1,...,X_n$ was produced by the $C^3$ linearization algorithm, which means that the merge procedure was resolved successfully, so after $X_i$ was pulled out there is no $X_j$ with $j \le i$ left in the merge procedure. So at the moment of pulling out, what remains is exactly $MRO_1$: the rest of the linearization after $X_i$, without any $X_j$, $j \le i$.

$Y_1,...,Y_n = X_1,...,X_{i-1}, X_i, Y_{i+1},...,Y_n$ was also produced by the $C^3$ linearization algorithm. Let's look at the moment when $X_i$ was pulled out in that computation. Again, the merge procedure was resolved successfully, so after $X_i$ was pulled out there is no $X_j$ with $j \le i$ left in the merge procedure. So at the moment of pulling out, what remains is exactly $MRO_2$: the rest of that linearization after $X_i$, without any $X_j$, $j \le i$.

Because the graph is finite, there exist some $X_{i_1}$, $X_{i_2}$ that are listed in $MRO_1$ in one order and in $MRO_2$ in the reverse order. There is a (first) merge stage at which $X_{i_1}$ is pulled out in the computation of $MRO_1$ while $X_{i_2}$ is pulled out in the computation of $MRO_2$.

Let's look at the structure of the merge:

$merge(L_1, L_2,...,L_N, L_{N+1}, L_{N+2},...,L_{2N})$

There are 3 possible cases, and in each one we get a contradiction:

1) $X_{i_1}$ and $X_{i_2}$ belong to the same list $L_j$.
2) $X_{i_1}$ is the head of some $L_j$ and $X_{i_2}$ is the head of some $L_k$.
3) $X_{i_1}$ is in the tail of some $L_j$ and $X_{i_2}$ is in the tail of some $L_k$.

1) $X_{i_1}$ and $X_{i_2}$ belong to the same list $L_j$.

* If $X_{i_1}$ precedes $X_{i_2}$, then $X_{i_2}$ is in the tail of $L_j$. This contradicts pulling out $X_{i_2}$ in the computation of $MRO_2$ (we cannot pull out an element while some tail still contains it).
* If $X_{i_2}$ precedes $X_{i_1}$, then $X_{i_1}$ is in the tail of $L_j$. This contradicts pulling out $X_{i_1}$ in the computation of $MRO_1$ (we cannot pull out an element while some tail still contains it).

2) $X_{i_1}$ is the head of some $L_j$ and $X_{i_2}$ is the head of some $L_k$.

At a single merge stage we pull out either $X_{i_1}$ or $X_{i_2}$, but not both (we pull out heads in the order of the lists). Contradiction.

3) $X_{i_1}$ is in the tail of some $L_j$ and $X_{i_2}$ is in the tail of some $L_k$.

Neither $X_{i_1}$ nor $X_{i_2}$ can be pulled out by the merge procedure (they are in a tail!). Contradiction.

Q.E.D.

Proposition 2.0
`object` has to be last in any MRO (of new-style classes).

Proof.
Assume, for contradiction, that some class X appears after `object` in some MRO. X has `object` as a (possibly indirect) parent. By a), children precede their parents, so X has to go before `object`. This contradicts our assumption. Q.E.D.

Proposition 2.1
If we have the MRO of some node $A_1$ and we choose some node appearing in that MRO, then that node's MRO is a subchain of the MRO of $A_1$.

Proof

Let G be an inheritance graph with at least one node, and choose some node $A_1$. The MRO of $A_1$ (if it exists) defines an order on (part of) the graph G:

$L[A_1]=A_1,A_2,...,A_n$


We have an inheritance graph, which means we have a (finite) DAG. Applying formula (*) recursively, we travel on the inheritance graph from children to parents. For each node $A_i$ there is a stage of the $C^3$ linearization algorithm at which some merge procedure contains $A_i$ (the part of the DAG reachable from $A_1$ is connected, since we travel from a child to its parents). Because, by assumption, the $C^3$ linearization algorithm produces the linearization $A_1,...,A_n$, every merge procedure is resolved successfully, so our $A_i$ is pulled out at some stage (S); after it is pulled out, no occurrence of $A_i$ is left in the merge procedure. Every $A_j$ with $j<i$ was pulled out earlier, so at stage (S) the merge contains exactly the MRO of $A_i$, written out explicitly.

(As shown above, it cannot contain any $A_j$ with $j<i$. If the remainder produced a list different from the MRO of $A_i$, this would mean there exist $A_{i_1}$, $A_{i_2}$ that are listed in the MRO of $A_1$ in one order and in the MRO of $A_i$ in the reverse order. This contradicts the uniqueness of the MRO, see proposition 1.0.)

Q.E.D.


Proposition 2.2
Let G be an inheritance graph with at least one node, and choose some node $A_1$ with MRO $L[A_1]$.
If we define a new class $B(A_1, A_2,...,A_n)$, that is, a class that inherits from n classes, then the MRO of B will be $L[B]=B+A_1+...$.

Proof

We have an inheritance graph, which means we have a (finite) DAG. Applying formula (*) recursively, we travel on the inheritance graph from children to parents.

$L[B(A_1, A_2,...,A_N)] = B + merge(L[A_1], L[A_2],...,L[A_N], A_1, A_2,...,A_N) = B + merge(A_1+merge(...), L[A_2],...,L[A_N], A_1, A_2,...,A_N)$

We will prove that $A_1$ can safely be pulled out of the merge. $A_1$ is the head of the first list, so it is the first candidate for pulling out. The last N arguments $A_1,...,A_N$ are single-element lists, so they cannot prevent $A_1$ from being pulled out. So it is sufficient to prove that $L[A_2],...,L[A_N]$ do not prevent it either.

Assume, for contradiction, that there exists $i$ with $2 \le i \le N$ such that $L[A_i]$ prevents $A_1$ from being pulled out.

By property b), $A_1$ precedes $A_i$.

By the definition of the merge procedure, the assumption means that the tail of $L[A_i]$ contains $A_1$. By constraint 0) this means that $A_i$ is a (possibly indirect) subclass of $A_1$, so by property a) $A_i$ precedes $A_1$. This contradicts the previous paragraph ($A_1$ precedes $A_i$).

Q.E.D

Proposition 2.3. Setting $n=1$ in Proposition 2.2, we get the following proposition:

Let G be an inheritance graph with at least one node, and choose some node $A_1$ with MRO $L[A_1]$.
If we define a new class $B(A_1)$, that is, a class that inherits from a single class, then the MRO of B will be $L[B]=B+A_1+...$.

Q.E.D.

That is, we proved that if we inherit from a single class (the case $n=1$ in Proposition 2.2), the MRO starts with B and then the (single) parent class $A_1$.

Proposition 2.4. Setting $n=2$ in Proposition 2.2, we get the following proposition:
Let G be an inheritance graph with at least one node, and choose some node $A_1$ with MRO $L[A_1]$.
If we define a new class $B(A_1, A_2)$, that is, a class that inherits from two classes, then the MRO of B will be $L[B]=B+A_1+...$.

Q.E.D.

That is, we proved that if we inherit from two classes (the case $n=2$ in Proposition 2.2), the MRO starts with B and then the first class in the inheritance tuple, $A_1$. There is no guarantee about which class comes next: it is not necessarily $A_2$, and not necessarily $A_1$'s own parent; it can, for example, be a subclass of that parent, see Example 2 below.


Example 1-Indirect way


Based on https://www.python.org/download/releases/2.3/mro/

Suppose, we have following class hierarchy:

$class D(O): pass$

$class E(O): pass$

$class B(D, E): pass$

where $O$ denotes `object`.

We can visualize the inheritance graph as follows:



Now let's find $L[B(D, E)]$ without an explicit calculation.


By proposition (0.0), $L[B(D, E)] = B, ...$. By proposition (2.0), the last class is `object`, so $L[B(D, E)] = ..., O$. So we know that

$L[B(D, E)] = B, ..., O$

We assume that the MRO exists, so there are two possibilities:

1) $L[B(D, E)] = B, D, E, O$
2) $L[B(D, E)] = B, E, D, O$

(D, E) is the inheritance tuple of B. By b), the order in which they appear must be preserved, so D must go before E; option 2) is therefore impossible, and (since we assume the MRO exists, and by proposition 1.0 it is unique) the answer is

$L[B(D, E)] = B, D, E, O$

Q.E.D


Example 2.

Let's look at another example. The Python standard library has the built-in dict class (written Dict below). There is also OrderedDict (written OrderDict below), which extends Dict. Python also has a Counter class; Counter inherits from Dict. It counts how many times each specific element (for example, each character of a string) appears, but the order in which it prints the result is unspecified.

Let's define custom class

$class OrderedCounter(Counter, OrderDict): pass$

As you can see in the picture, we have a diamond inheritance problem here.




We will look at why this actually works in two different ways: by direct application of the MRO algorithm, and by using only properties a) and b).

Indirect approach

We will not calculate the full MRO, but we will put enough constraints on it to establish that the first class after OrderedCounter in its MRO is Counter, and that OrderDict goes before Dict.


So, we have to prove:

I. The MRO of OrderedCounter starts with OrderedCounter, Counter.
II. object has to be last in the MRO.
III. OrderDict goes before Dict in the MRO of OrderedCounter.

I. By proposition (0.0), the MRO of OrderedCounter starts with OrderedCounter. OrderedCounter has two direct parents, Counter and OrderDict. By a), "children precede their parents", the element right after OrderedCounter has to be Counter or OrderDict. From b), "the order specified in the inheritance tuple is preserved", it follows that Counter has to go before OrderDict, because that is how they are listed in the inheritance tuple of OrderedCounter. So Counter is the second element of the MRO.

II. This follows directly from proposition (2.0).

III.
So far we have established that

(#) L[OrderedCounter] = OrderedCounter, Counter, ..., object.

We have the following options:

1) L[OrderedCounter] = OrderedCounter, Counter, Dict, OrderDict, object
2) L[OrderedCounter] = OrderedCounter, Counter, OrderDict, Dict, object

There is no other possibility that does not contradict (#) and consists only of classes of the given inheritance graph. Let's write option 1) a bit more explicitly:

1) L[OrderedCounter] = OrderedCounter(Counter, OrderDict), Counter(Dict), Dict(object), OrderDict(Dict), object

On the one hand, Dict is listed before OrderDict; on the other hand, we have OrderDict(Dict), and by a) "children precede their parents" OrderDict should be listed before Dict. Contradiction.

Assuming the given graph is linearizable (Python does not raise an error), it has to be option 2), that is

L[OrderedCounter] = OrderedCounter, Counter, OrderDict, Dict, object

and OrderDict is reached before Dict in the MRO.

Q.E.D.



Direct MRO computation 1.


$L[OrderedCounter(Counter, OrderDict)]=OrderedCounter+merge(L[Counter(Dict)], L[OrderDict(Dict)], Counter, OrderDict)=$

$=OrderedCounter+merge(Counter+merge(L[Dict(object)], Dict), OrderDict+merge(L[Dict(object)], Dict), Counter, OrderDict)=$

$=OrderedCounter+merge(Counter+merge(Dict+merge(L[object], object), Dict), OrderDict+merge(Dict+merge(L[object], object), Dict), Counter, OrderDict)=$

$=OrderedCounter+merge(Counter+merge(Dict+object, Dict), OrderDict+merge(Dict+object, Dict), Counter, OrderDict)=$

$=OrderedCounter+merge(Counter+Dict+object, OrderDict+Dict+object, Counter, OrderDict)=$

$=OrderedCounter+Counter+merge(Dict+object, OrderDict+Dict+object, \emptyset, OrderDict)=$

Note: we can't take Dict from the merge; $OrderDict+Dict+object$ disagrees (Dict appears in its tail), so we take OrderDict.

$=OrderedCounter+Counter+OrderDict+merge(Dict+object, Dict+object, \emptyset, \emptyset)=$

$=OrderedCounter+Counter+OrderDict+Dict+merge(object, object, \emptyset, \emptyset)=$

$=OrderedCounter+Counter+OrderDict+Dict+object$

That is, $L[OrderedCounter(Counter, OrderDict)]=OrderedCounter+Counter+OrderDict+Dict+object$

Note that OrderDict is listed before Dict in the MRO.

Q.E.D.

Too complicated; let's try another way.

Direct MRO computation 2 (shorter way).

We will go level by level, from the root downwards.

$L[object]=object$ by (1.0)
$L[Dict(object)]=Dict+merge(L[object], object)=Dict+merge(object, object)=Dict+object$
$L[Counter(Dict)]=Counter+merge(L[Dict(object)], Dict)=Counter+merge(Dict+object, Dict)=Counter+Dict+object$
$L[OrderDict(Dict)]=OrderDict+merge(L[Dict(object)], Dict)=OrderDict+merge(Dict+object, Dict)=OrderDict+Dict+object$

Then,

$L[OrderedCounter(Counter, OrderDict)]=OrderedCounter+merge(L[Counter(Dict)], L[OrderDict(Dict)], Counter, OrderDict)=$

$=OrderedCounter+merge(Counter+Dict+object, OrderDict+Dict+object, Counter, OrderDict)=$

$=OrderedCounter+Counter+merge(Dict+object, OrderDict+Dict+object, \emptyset, OrderDict)=$

Note: we can't take Dict from the merge; $OrderDict+Dict+object$ disagrees (Dict appears in its tail), so we take OrderDict.

$=OrderedCounter+Counter+OrderDict+merge(Dict+object, Dict+object, \emptyset, \emptyset)=$

$=OrderedCounter+Counter+OrderDict+Dict+merge(object, object, \emptyset, \emptyset)=$

$=OrderedCounter+Counter+OrderDict+Dict+object$

That is, $L[OrderedCounter(Counter, OrderDict)]=OrderedCounter+Counter+OrderDict+Dict+object$

Note that OrderDict is listed before Dict in the MRO.

As you can see, we did the same calculation, but this order of computation lets us avoid doing the same work twice.

Q.E.D.
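The same computation can be replayed with the c3_merge / c3_linearize sketch from the beginning of this article (string names again stand in for the real classes):

lin = {'object': ['object']}
lin['Dict'] = c3_linearize('Dict', ['object'], lin)
lin['Counter'] = c3_linearize('Counter', ['Dict'], lin)
lin['OrderDict'] = c3_linearize('OrderDict', ['Dict'], lin)
print(c3_linearize('OrderedCounter', ['Counter', 'OrderDict'], lin))
# ['OrderedCounter', 'Counter', 'OrderDict', 'Dict', 'object']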

For more elaborate example see https://en.wikipedia.org/wiki/C3_linearization or here https://www.python.org/download/releases/2.3/mro/



****************

Appendix A - an example of a valid inheritance graph that the $C^3$ linearization algorithm fails to linearize.

$class X(object)$
$class Y(object)$
$class P(X, Y)$
$class Q(Y, X)$ (the problem is here, inconsistent orderings of parents with the line above)
$class Z(P, Q)$

Inheritance graph representation:



Note: this is a (finite) directed acyclic graph (DAG), so it can be topologically sorted, but $C^3$ fails to linearize it. For example, Z+P+Q+X+Y+object is a possible topological sort, but it violates property b) "the order specified in the inheritance tuple is preserved": X is listed before Y, while Q has the inheritance tuple (Y, X).

For brevity, object is omitted from the linearizations below; it doesn't affect the conflict.

$L[X]=X$
$L[Y]=Y$
$L[P(X,Y)]=P+merge(L[X], L[Y], X, Y)=P+merge(X, Y, X, Y)=P+X+Y$
$L[Q(Y,X)]=Q+merge(L[Y], L[X], Y, X)=Q+merge(Y, X, Y, X)=Q+Y+X$
$L[Z(P,Q)]=Z+merge(L[P], L[Q], P, Q)=Z+merge(P+X+Y, Q+Y+X, P, Q)=$
$=Z+P+merge(X+Y, Q+Y+X, \emptyset, Q)=$

X can't be pulled out: it appears in the tail of Q+Y+X, which doesn't agree.

$=Z+P+Q+merge(X+Y, Y+X, \emptyset, \emptyset)=$

X can't be pulled out from the first list: Y+X doesn't agree.
Y can't be pulled out from the second list: X+Y doesn't agree.

We're stuck. No good head can be selected, because each remaining head appears in the tail of some other list. The merge is impossible to compute due to inconsistent orderings of dependencies in the inheritance hierarchy. According to the merge procedure, the $C^3$ linearization algorithm stops with failure.

Q.E.D.
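
The same failure can be reproduced in CPython itself (a minimal sketch, assuming Python 3):

class X(object): pass
class Y(object): pass
class P(X, Y): pass
class Q(Y, X): pass

try:
    class Z(P, Q): pass
except TypeError as e:
    print(e)  # something like: Cannot create a consistent method resolution order (MRO) for bases X, Y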


***

Appendix B - Proof that the properties follow from the $C^3$ linearization algorithm.


The $C^3$ linearization algorithm has the following constraints:

(0.0) The parents' MROs remain consistent: the inheritance graph determines the structure of the method resolution order.
(0.1) The local MROs remain consistent, i.e., a superclass appears in the MRO only after its local (child) classes appear.
(0.2) Monotonicity: an MRO is monotonic if, whenever C1 precedes C2 in the linearization of C, then C1 precedes C2 in the linearization of any subclass of C.

or rephrasing it more concisely:

a) Children precede their parents.
b) If a class inherits from multiple classes, they are kept in the order specified in the tuple of base classes.


Proof.
The inheritance graph is a (finite) DAG.

Let's choose some node X. Its class definition is of the form $class X(Y_1, Y_2,...,Y_N)$, then

(*) $L[X(Y_1, Y_2,...,Y_N)] = X + L[(Y_1,...,Y_N)] = X + merge(L[Y_1], L[Y_2],...,L[Y_N], Y_1, Y_2,...,Y_N)$
$L[\emptyset]=\emptyset$.

where $Y_1, Y_2,...,Y_N$ are the parents of X (this follows from the definition of X).

The merge is done by selecting the first head of the lists which does not appear in the tail (all elements of a list except the first) of any of the lists. Note that a good head may appear as the first element in multiple lists at the same time, but it is forbidden to appear anywhere else. The selected element is removed from all the lists where it appears as a head and appended to the output list. The process of selecting and removing a good head to extend the output list is repeated until all remaining lists are exhausted. If at some point no good head can be selected, because the heads of all remaining lists appear in any one tail of the lists, then the merge is impossible to compute due to inconsistent orderings of dependencies in the inheritance hierarchy and no linearization of the original class exists.

Applying formula (*) recursively, we travel over the inheritance graph from children to parents. So (0.0) holds.


Proof of (0.1) and a). We want to prove that X precedes every node in L[X].

(*) $L[X(Y_1, Y_2,...,Y_N)] = X + merge(L[Y_1], L[Y_2],...,L[Y_N], Y_1, Y_2,...,Y_N)$

The merge procedure ensures that when we pull an element out, it no longer remains in any list inside the merge (we pull only from a head, and only when no tail contains this class). The merge consists only of lists of the (immediate and distant) parents of X (if X has other children, the algorithm will not visit them), so X is listed in the final list only once (we pull X out first, and the merge doesn't contain X). $L[Y_1]$ is listed before $Y_1$, $L[Y_2]$ before $Y_2$, ..., $L[Y_N]$ before $Y_N$; this ensures that we can't pull an (immediate or distant) parent of $Y_i$ out of the merge before pulling out $Y_i$ itself ($Y_i$ would disagree; this is the reason we merge not only with $L[Y_i]$, but also with $Y_i$). So X will be listed before any (immediate and distant) parent of X; in other words, X precedes every node in L[X].

Proof of (0.2) and b). We want to prove that if a class inherits from multiple classes, they are kept in the order specified in the tuple of base classes.

Let's choose some node X (X is arbitrary; if X has other children, the algorithm will not visit them). It is enough to prove that $Y_1, Y_2,...,Y_N$ preserve their order.

Let's look at (*) again. X is listed first and will not be pulled from the merge (by a)). So it is enough to look at

(#1) $merge(L[Y_1], L[Y_2],...,L[Y_N], Y_1, Y_2,...,Y_N)$

and prove that the merge procedure ensures that $Y_1, Y_2,...,Y_N$ preserve their order.

Note: This is the reason we list all $L[Y_i]$ before all $Y_i$.

Let's assume, by contradiction, that there exist $Y_{i_1}$, $Y_{i_2}$ such that $i_1<i_2$, but $Y_{i_2}$ is pulled out before $Y_{i_1}$ in some merge procedure.

There are 3 cases:

1) $Y_{i_1}$ is disjoint from $Y_{i_2}$
2) $Y_{i_1}$ is a parent of $Y_{i_2}$
3) $Y_{i_2}$ is a parent of $Y_{i_1}$

1) $Y_{i_1}$ is disjoint from $Y_{i_2}$
$Y_{i_1}$ is not in any $L[Y_j]$, so $Y_{i_1}$ will be pulled out before $Y_{i_2}$. Contradiction.

2) $Y_{i_1}$ is a parent of $Y_{i_2}$

$Y_{i_2}$ is in $L[Y_{i_1}]$.

Contradiction with 0).

3) $Y_{i_2}$ is a parent of $Y_{i_1}$
According to property a), $Y_{i_1}$ is pulled out before $Y_{i_2}$. Contradiction.

Q.E.D.


***

Appendix C - Time complexity, halting of the algorithm, correctness
Algorithm
Suppose you have a class definition $class X(Y_1, Y_2,...,Y_N): pass$, then

(*) $L[X(Y_1, Y_2,...,Y_N)] = X + L[(Y_1,...,Y_N)] = X + merge(L[Y_1], L[Y_2],...,L[Y_N], Y_1, Y_2,...,Y_N)$
$L[\emptyset]=\emptyset$.

The merge is done by selecting the first head of the lists which does not appear in the tail (all elements of a list except the first) of any of the lists. Note that a good head may appear as the first element in multiple lists at the same time, but it is forbidden to appear anywhere else. The selected element is removed from all the lists where it appears as a head and appended to the output list. The process of selecting and removing a good head to extend the output list is repeated until all remaining lists are exhausted. If at some point no good head can be selected, because the heads of all remaining lists appear in any one tail of the lists, then the merge is impossible to compute due to inconsistent orderings of dependencies in the inheritance hierarchy and no linearization of the original class exists.
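
For reference, here is a minimal Python sketch of formula (*) and the merge procedure described above. It is an illustration only (CPython's real implementation is in C); the inheritance graph is passed in as a hypothetical dictionary mapping a class name to its tuple of bases, and the parents are passed to the merge as one extra list, which is equivalent to listing them individually.

def c3_merge(seqs):
    # Repeatedly select a "good head": the head of some list that does not
    # appear in the tail of any list; append it to the output and remove it.
    result = []
    seqs = [list(s) for s in seqs if s]
    while seqs:
        for seq in seqs:
            head = seq[0]
            if not any(head in s[1:] for s in seqs):
                break
        else:
            # no good head exists: inconsistent hierarchy, no linearization
            raise TypeError("inconsistent hierarchy, no C3 linearization exists")
        result.append(head)
        # a good head occurs only as a head, so this removes it from every list
        seqs = [[c for c in s if c != head] for s in seqs]
        seqs = [s for s in seqs if s]
    return result

def c3_linearize(cls, bases):
    # (*) L[X(Y_1,...,Y_N)] = X + merge(L[Y_1],...,L[Y_N], Y_1,...,Y_N)
    parents = bases.get(cls, ())
    if not parents:
        return [cls]
    return [cls] + c3_merge([c3_linearize(p, bases) for p in parents] + [list(parents)])

# The OrderCounter example from above:
bases = {"OrderCounter": ("Counter", "OrderDict"), "Counter": ("Dict",),
         "OrderDict": ("Dict",), "Dict": ("object",)}
print(c3_linearize("OrderCounter", bases))
# ['OrderCounter', 'Counter', 'OrderDict', 'Dict', 'object']
# The graph from Appendix A makes c3_merge raise TypeError instead.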

The algorithm works only on a (finite) directed acyclic graph (DAG) G=(V,E), where V is the set of nodes of the graph and E the set of (directed) edges (from child to parent). Let's denote |V|=C.


Lemma C.0
Each list in the merge procedure has no repetitions.

Proof.
Let's assume, by contradiction, that there is at least one repetition. Applying formula (*) recursively, we travel over the inheritance graph from children to parents. So, if some list in the merge procedure has a repetition, it means that there is a cycle in the inheritance graph. But the inheritance graph is a DAG, so it contains no cycles.

Q.E.D.

Lemma C.1
The list of all elements pulled out of the merge procedure has no repetitions.

Proof.
Let's assume, by contradiction, that the same element $X$ was pulled out of the merge procedure at least twice. Consider the first time $X$ is pulled out. When $X$ is pulled out, no copy of $X$ remains in the lists (wherever $X$ is the first element of a list, it is removed now; and if $X$ were contained in some tail of any list, the merge procedure would not let us pull it out. Because we did succeed in pulling it out, $X$ is not contained in any list of the merge anymore). So we can't pull $X$ out again. Contradiction.

Q.E.D.

Lemma C.2
The merge procedure has O(|V|) lists.

Proof.
Any node X has $O(|V|)$ classes in its inheritance tuple (at most $|V|-1$). The merge has $O(|V|)$ lists: the number of classes in the inheritance tuple ($Y_1, Y_2,...,Y_N$) plus the same number of their MROs ($L[Y_1], L[Y_2],...,L[Y_N]$). In total, it's $O(|V|)+O(|V|)=O(|V|)$.

Q.E.D.

Lemma C.3
If the $C^3$ linearization algorithm produces a linearization, then each list in the merge procedure has O(|V|) elements.

Proof.
The $C^3$ linearization algorithm is recursively described as
(*) $L[X(Y_1, Y_2,...,Y_N)] = X + merge(L[Y_1], L[Y_2],...,L[Y_N], Y_1, Y_2,...,Y_N)$ for some node X (we are computing the MRO of X).

The $C^3$ linearization algorithm pulls an element $X$ out of the merge procedure in such a way that, after pulling it out, $X$ is not contained in any list in the merge (if it were, and the algorithm produced a linearization, then $X$ would be pulled out twice, in contradiction with Lemma C.1). Every list must be emptied, and each pull removes at most one element from a given list (and we can't pull from an empty list $\emptyset$), so the number of pulls is at least the size of the largest list inside the merge.

We pull out one element at a time, and in total we pull out $O(|V|)$ elements, so the size of the largest list inside the merge has to be $O(|V|)$.

Q.E.D.




Proposition C.1 (time complexity).
The time complexity of the algorithm is $O(|V|^3)=O(C^3)$.

Proof.

The $C^3$ linearization algorithm is recursively described as
(*) $L[X(Y_1, Y_2,...,Y_N)] = X + merge(L[Y_1], L[Y_2],...,L[Y_N], Y_1, Y_2,...,Y_N)$ for some node X (we are computing the MRO of X).

Prepending X takes O(1). By Lemma C.2, the merge has $O(|V|)$ lists. We take the head $H$ of the first list and check it against the tails of all other lists; there are $O(|V|)$ of them. In the worst case, we can't pull $H$ out, so we go to the next list and make the same check. In the worst case, we will make $O(|V|)$ such checks.

By Lemma C.3, the first list has $O(|V|)$ elements in the worst case (if the algorithm gets stuck, we have less work to do; during a regular run we are just checking that the linearization can be done), so in the worst case, emptying the first list will take $O(|V|)*O(|V|)=O(|V|^2)$. By Lemma C.2, the merge has $O(|V|)$ lists, so it will take $O(|V|)*O(|V|^2)=O(|V|^3)=O(C^3)$ to empty all the lists in the worst-case scenario.

Q.E.D.

Proposition C.2. (halting)
The $C^3$ linearization algorithm halts.

Proof.
If there is some stage at which we can't pull a new element out of the merge procedure, the algorithm explicitly halts (with failure).

$L[\emptyset]=\emptyset$ by definition.

Let's assume that the merge procedure always succeeds. Then it produces some list $x_1,...,x_{k-1}, x_k$. By Lemma C.1, such a list has no repetitions.

(*) $L[X(Y_1, Y_2,...,Y_N)] = X + merge(L[Y_1], L[Y_2],...,L[Y_N], Y_1, Y_2,...,Y_N)$

So producing each element of the output, starting from the second one, corresponds to pulling an element out of the merge.


Let's assume, by induction, that the $C^3$ linearization algorithm halts for all lists with $k-1$ elements.

For k=0, we have $L[\emptyset]=\emptyset$, which holds by definition.

For k=1, we have $L[X]=X+merge(L[\emptyset],\emptyset)=X+merge(\emptyset, \emptyset)=X+\emptyset=X$, which holds.

Let's look at pulling out $x_k$. By the induction assumption, producing $x_1,...,x_{k-1}$ halts ($X=x_1+x_2+...+x_{k-1}+x_k$). By assumption, the merge procedure succeeds (and it produces $x_k$). Appending $x_k$ to the list $x_1,...,x_{k-1}$ halts, so producing the list $x_1,...,x_{k-1}, x_k$ halts as well.

Q.E.D.


Note: as you can see in Appendix A, not every inheritance graph has a linearization.

Proposition C.3. (correctness)

1) If the $C^3$ linearization algorithm produces a linearization $X_1,...,X_n$, then it contains all elements of the inheritance graph and there are no repetitions in it.

2) A linearization that satisfies constraints 0.0)-0.2) exists iff (<=>) the merge procedure can pull out some element at every stage of the $C^3$ linearization algorithm.

Proof.

2) <=) If the merge procedure can pull out some element at every stage of the $C^3$ linearization algorithm, then the resulting linearization satisfies constraints 0.0)-0.2). The inheritance graph is a (finite) DAG. Applying formula (*) recursively, we travel over the inheritance graph from children to parents, so 0.0) holds. The order within the lists inside the merge satisfies 0.1) "local MROs remain consistent" (see Appendix B, proof of 0.1) and a)). If the merge procedure succeeds in pulling an element out, this means that no tail of another list forbids it, that is, property 0.2) "monotonicity" is satisfied.

2) =>) We should prove that if the merge procedure can't pull out any element at some stage of the $C^3$ linearization algorithm, then there is no linearization that satisfies constraints 0.0)-0.2).

The merge procedure can't pull out any element if, at some point, no good head can be selected because each remaining head appears in the tail of some list. This means that, on the one hand, in order to satisfy 0.1) "local MROs remain consistent" we should pull one of the heads out of the merge, but on the other hand this would break 0.2) "monotonicity" (see the example in Appendix A).


1) We should prove that if the $C^3$ linearization algorithm produces a linearization $X_1,...,X_n$, then it contains all elements of the inheritance graph and there are no repetitions in it.

Proof.

The inheritance graph is a (finite) DAG. Applying formula (*) recursively, we travel over the inheritance graph from children to parents. For each node $X_i$ there exists a stage of the $C^3$ linearization algorithm with a merge procedure in which $X_i$ is listed as a single-element list (the inheritance graph is connected; we travel from each child to its parents). Because, by assumption, the $C^3$ linearization algorithm produces the linearization $X_1,...,X_n$, this merge procedure was resolved successfully, so $X_i$ was pulled out at some stage; hence the produced linearization $X_1,...,X_n$ contains $X_i$.

Let's prove that there are no repetitions. By Lemma C.1, the list of all elements pulled out of the merge procedure has no repetitions. The result of the $C^3$ linearization algorithm, the linearization $X_1,...,X_n$, is the concatenation of the first element (the class at which the $C^3$ linearization algorithm starts) and the output of the recursive merge procedure. $X_1$ can't be contained in the merge (the merge works only with its (direct or indirect) parents).

Q.E.D.