# Paper Reading (50): A novel matrix game with payoffs of maxitive belief structure

Inge · 2022-06-23 18:03:48

# 0 Introduction

## 0.1 Subject

IJIS 2019: A novel matrix game with payoffs of maxitive belief structure

## 0.2 Background

Maxitive belief structures (MBSs) express imprecise and uncertain information in a specific structure, so that both the nonspecificity and the imprecision of probabilities can be modeled simultaneously.

## 0.3 Method

The proposed model can describe the interaction between the two players of a game, and an effective method for solving the game by reaching its equilibrium point is proposed.

## 0.4 Bib

```bibtex
@article{Han:2019:690706,
  author  = {Yu Zhen Han and Yong Deng},
  title   = {A novel matrix game with payoffs of maxitive belief structure},
  journal = {International Journal of Intelligent Systems},
  volume  = {34},
  number  = {4},
  pages   = {690--706},
  year    = {2019},
  doi     = {10.1002/int.22072}
}
```


# 1 Preliminaries

## 1.1 Dempster-Shafer (D-S) belief theory

To deal with uncertainty, many mathematical tools have been proposed, for example D numbers, Z numbers, and fuzzy sets. D-S theory, also known as evidence theory or belief theory, was proposed by Dempster and later extended by Shafer. D-S theory can be regarded as a generalization of Bayesian probability theory, since it needs to satisfy weaker constraints than Bayesian theory. Owing to its ability to handle imprecise and uncertain information, D-S theory has been applied to information fusion, multi-criteria decision making (MCDM), pattern recognition, and risk analysis. Some basic concepts of the theory follow:
**Definition 1 (Frame of discernment).** Let $\Omega=\{\theta_1,\theta_2,\dots,\theta_n\}$ denote a finite set of $n$ elements, and let $P(\Omega)$ denote its power set, which contains $2^n$ elements:
$$P(\Omega)=\{\emptyset,\{\theta_1\},\{\theta_2\},\dots,\{\theta_n\},\{\theta_1,\theta_2\},\{\theta_1,\theta_3\},\dots,\Omega\}.\tag{1}$$

**Definition 2 (Basic probability assignment, BPA).** A BPA is a mapping from the power set $P(\Omega)$ to values between $0$ and $1$:
$$m:P(\Omega)\to[0,1],\tag{2}$$

which satisfies the conditions:

$$m(\emptyset)=0,\qquad\sum_{A\subseteq\Omega}m(A)=1.\tag{3}$$

Here $m(A)$ indicates the degree of belief assigned to $A$ in a practical application.
**Definition 3 (Belief and plausibility functions).** The belief function is defined as:

$$\text{Bel}:P(\Omega)\to[0,1]\quad\text{and}\quad\text{Bel}(A)=\sum_{B\subseteq A}m(B).\tag{4}$$

The plausibility function $\text{Pl}$ is defined as

$$\text{Pl}:P(\Omega)\to[0,1]\quad\text{and}\quad\text{Pl}(A)=\sum_{B\cap A\neq\emptyset}m(B)=1-\text{Bel}(\overline{A}).\tag{5}$$

$\text{Bel}(A)$ and $\text{Pl}(A)$ represent the lower and upper bounds of the belief in $A$, respectively.
**Definition 4 (Dempster's combination rule).** If $m(A)>0$, then $A$ is called a focal element, and the set of all focal elements is called the body of evidence (BOE). When multiple independent BOEs are available, the combined evidence can be obtained by Dempster's combination rule:

$$m(A)=\frac{\sum\limits_{B,C\subseteq\Omega,\,B\cap C=A}m_1(B)m_2(C)}{1-K},\tag{6}$$

where $K=\sum_{B\cap C=\emptyset}m_1(B)m_2(C)$ is a normalization constant that quantifies the conflict. The combination rule is meaningful only when $K\neq1$; otherwise the pieces of evidence are in total conflict, the rule fails, and how to deal with such conflicts remains an open question.
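Dempster's rule in Eq. (6) is easy to implement once a BPA is given a concrete representation. The sketch below (an illustration, not from the paper; the dict-of-`frozenset` representation is an assumption) combines two BOEs and normalizes by $1-K$:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two BPAs via Eq. (6); a BPA is a dict {frozenset: mass}."""
    combined, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc  # K: mass falling on empty intersections
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {a: mass / (1.0 - conflict) for a, mass in combined.items()}

# two independent BOEs over a frame {t1, t2}
m1 = {frozenset({"t1"}): 0.6, frozenset({"t1", "t2"}): 0.4}
m2 = {frozenset({"t2"}): 0.5, frozenset({"t1", "t2"}): 0.5}
m = dempster_combine(m1, m2)  # here K = 0.6 * 0.5 = 0.3
```

The combined masses again sum to one, as Eq. (3) requires.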

## 1.2 Definition of the maxitive belief structure (MBS)

A maxitive belief structure $M$ on a finite space $X=\{x_1,x_2,\dots,x_n\}$ associated with a variable $V$ consists of $q$ nonempty subsets of $X$, called focal elements and denoted $F_i$, $i=1,2,\dots,q$, whose weights satisfy $M(F_i)\geq0$ and $\max_iM(F_i)=1$. In contrast to classical D-S belief theory, the weights of the focal elements need not sum to $1$, which makes the structure better suited to practical applications.
Suppose $A$ is a subset of $X$. Two important set functions associated with $M$, called the upper possibility and the lower possibility, are defined as follows:

$$\begin{aligned} U_{M}(A) &=\max_{i,\,A\cap F_{i}\neq\varnothing}\{M(F_{i})\},\\ L_{M}(A) &=\max_{i,\,F_{i}\subseteq A}\{M(F_{i})\}. \end{aligned}\tag{7}$$

These can further be expressed as

$$\begin{aligned} U_{M}(A) &=\max_{i}[M(F_{i})\wedge G_{i}(A)],\\ L_{M}(A) &=\max_{i}[M(F_{i})\wedge H_{i}(A)], \end{aligned}\tag{8}$$

where $G_i(A)=1$ if $A\cap F_i\neq\emptyset$ and $0$ otherwise, and $H_i(A)=1$ if $F_i\subseteq A$ and $0$ otherwise. Moreover, $G_i(A)=\max_i[F_i(x_i)\wedge A(x_i)]$, where $F_i(x_i)$ and $A(x_i)$ are indicator functions, and $H_i(A)$ can be rewritten as:

$$H_i(A)=1-\max_i[F_i(x_i)\wedge(1-A(x_i))].\tag{9}$$
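The two possibility measures of Eq. (7) translate directly into code. A minimal sketch (representing an MBS as a dict from `frozenset` focal elements to weights is an assumption, not the paper's notation):

```python
def upper_possibility(M, A):
    """U_M(A): largest weight among focal elements intersecting A (Eq. 7)."""
    return max((w for F, w in M.items() if F & A), default=0.0)

def lower_possibility(M, A):
    """L_M(A): largest weight among focal elements contained in A (Eq. 7)."""
    return max((w for F, w in M.items() if F <= A), default=0.0)

# an MBS: the weights need not sum to 1, but the largest weight must be 1
M = {frozenset({"x1"}): 1.0, frozenset({"x1", "x2"}): 0.7, frozenset({"x3"}): 0.4}
A = frozenset({"x2", "x3"})
```

Here `upper_possibility(M, A)` is 0.7 (via the focal element {x1, x2}) and `lower_possibility(M, A)` is 0.4 (via {x3}); the lower possibility never exceeds the upper one.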
**Definition 5 (Fusing rule of MBSs).** Suppose $M_1$ and $M_2$ are two MBSs providing two pieces of information about a variable $V$, with focal elements $F_1,F_2,\dots,F_p$ and $E_1,E_2,\dots,E_q$, respectively. Their combination rule can be defined as follows:

$$M(F_i\cap E_j)=\frac{\min[M_1(F_i),M_2(E_j)]}{k}\qquad\text{for }F_i\cap E_j\neq\emptyset,\tag{10}$$

where $k=\max_{F_i\cap E_j\neq\emptyset}\min[M_1(F_i),M_2(E_j)]$. This operation can be written as $M=M_1\perp M_2$ and represents maxitive fusion. Note that $\perp$ is symmetric, i.e., $M_1\perp M_2=M_2\perp M_1$. Here $k$ is the conflict coefficient: when $k=0$, the two MBSs are in total conflict and cannot be fused. If there exist $F^*$ and $E^*$ such that $F^*\cap E^*\neq\emptyset$ and $M_1(F^*)=M_2(E^*)=1$, then
$$M(F_i\cap E_j)=\min[M_1(F_i),M_2(E_j)]\qquad\text{for }F_i\cap E_j\neq\emptyset.\tag{11}$$

It can easily be proved that $k$ is a discounting coefficient and that $\max M(F)=1$. Furthermore, the rule can be extended to the case of more than two MBSs. Suppose $M_i$, $i=1,2,\dots,n$, are multiple MBSs with focal elements $F_{ij}$. When the condition $F_{1j_1}\cap F_{2j_2}\cap\cdots\cap F_{nj_n}\neq\varnothing$ is satisfied, we have

$$M\left(F_{1j_1}\cap F_{2j_2}\cap\cdots\cap F_{nj_n}\right)=\frac{\min\left[M_{1}(F_{1j_{1}}),M_{2}(F_{2j_{2}}),\dots,M_{n}(F_{nj_{n}})\right]}{k},\tag{12}$$

where $k=\max_{F_{1j_1}\cap\cdots\cap F_{nj_n}\neq\varnothing}\min\left[M_{1}(F_{1j_{1}}),M_{2}(F_{2j_{2}}),\dots,M_{n}(F_{nj_{n}})\right]$.
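A sketch of the fusion rule of Eq. (10), again with an MBS as a dict from `frozenset` focal elements to weights. When several pairs of focal elements share the same intersection, the sketch keeps the largest candidate weight; this tie-handling is an assumption in keeping with the maxitive setting, since the text above does not spell that case out:

```python
from itertools import product

def fuse_mbs(M1, M2):
    """Maxitive fusion M1 ⊥ M2 (Eq. 10); an MBS is a dict {frozenset: weight}."""
    raw = {}
    for (F, w1), (E, w2) in product(M1.items(), M2.items()):
        inter = F & E
        if inter:  # only nonempty intersections carry weight
            raw[inter] = max(raw.get(inter, 0.0), min(w1, w2))
    if not raw:
        raise ValueError("k = 0: the two MBSs are in total conflict")
    k = max(raw.values())  # conflict coefficient of Eq. (10)
    return {F: w / k for F, w in raw.items()}

M1 = {frozenset({"x1"}): 1.0, frozenset({"x1", "x2"}): 0.6}
M2 = {frozenset({"x2"}): 1.0, frozenset({"x1", "x2"}): 0.5}
M12 = fuse_mbs(M1, M2)  # dividing by k renormalizes the largest weight to 1
```

After fusion the maximum weight is again 1, so the result is a valid MBS.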

## 2.3 Traditional matrix game with crisp number payoffs

A game usually involves a set of players and a finite strategy set for each player. Let $S_A=\{s_{A1},s_{A2},\dots,s_{Am}\}$ and $S_B=\{s_{B1},s_{B2},\dots,s_{Bn}\}$ denote the pure strategy sets of players $A$ and $B$. Let $A=(a_1,a_2,\dots,a_m)$ and $B=(b_1,b_2,\dots,b_n)$ denote mixed strategies of players $A$ and $B$. Let $X=(x_{ij})_{m\times n}$ denote player $A$'s payoff matrix. Then the triple $(\{A\},\{B\},X)$ represents a two-player zero-sum matrix game with expected payoff

$$AXB^T=\sum_{i=1}^m\sum_{j=1}^na_ix_{ij}b_j.\tag{13}$$

In this case, player $B$'s expected loss is $-AXB^T$.
John von Neumann proved the minimax theorem, which makes matrix game problems tractable and yields the ideal strategies of both players. The theorem shows that a player should seek a mixed strategy that maximizes their minimum expected gain, or equivalently one that minimizes their maximum expected loss. This can be formulated as a linear programming problem:
$$\max u$$

$$\text{s.t. }\begin{cases}\sum_{i=1}^{m}x_{ij}a_{i}\geq u, & j=1,2,\dots,n,\\ \sum_{i=1}^{m}a_{i}=1, & \\ a_{i}\geq0, & i=1,2,\dots,m,\end{cases}\tag{14}$$

and

$$\min v$$

$$\text{s.t. }\begin{cases}\sum_{j=1}^{n}x_{ij}b_{j}\leq v, & i=1,2,\dots,m,\\ \sum_{j=1}^{n}b_{j}=1, & \\ b_{j}\geq0, & j=1,2,\dots,n.\end{cases}\tag{15}$$

Equations (14) and (15) form a primal-dual pair of linear programs, which implies that the maximum of $u$ equals the minimum of $v$. The value $Z=u=v$ is the value of the matrix game.
The following example shows more concretely how the minimax theorem solves a matrix game. Let $(S_A,S_B,X)$ denote a game in which player $A$ has three strategies, $S_A=\{s_{A1},s_{A2},s_{A3}\}$, and player $B$ has four strategies, $S_B=\{s_{B1},s_{B2},s_{B3},s_{B4}\}$. Player $A$'s payoff matrix is:

$$X=\begin{pmatrix}5&6&-8&-5\\-5&8&10&-4\\7&5&-10&9\end{pmatrix}.$$

In this zero-sum game, player $B$'s payoff matrix is $-X$. Let $A=(a_1,a_2,a_3)$ and $B=(b_1,b_2,b_3,b_4)$ denote mixed strategies of players $A$ and $B$, respectively. The equilibrium (i.e., the outcome) of the game can be obtained from Equations (14) and (15):
$$\max u$$

$$\text{s.t. }\begin{cases}5a_1-5a_2+7a_3\geq u,\\6a_1+8a_2+5a_3\geq u,\\-8a_1+10a_2-10a_3\geq u,\\-5a_1-4a_2+9a_3\geq u,\\a_1+a_2+a_3=1,\\a_1,a_2,a_3\geq0,\end{cases}$$

and

$$\min v$$

$$\text{s.t. }\begin{cases}5b_1+6b_2-8b_3-5b_4\leq v,\\-5b_1+8b_2+10b_3-4b_4\leq v,\\7b_1+5b_2-10b_3+9b_4\leq v,\\b_1+b_2+b_3+b_4=1,\\b_1,b_2,b_3,b_4\geq0.\end{cases}\tag{16}$$

Solving these programs, the equilibrium is reached at $A^*=(0,0.5312,0.4687)$ and $B^*=(0.625,0,0.3750,0)$, with $u=v=0.6250$.
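This example can be checked numerically. The sketch below (an illustration using `scipy.optimize.linprog`, not the authors' code) encodes the LP of Eq. (14) with variables $(a_1,\dots,a_m,u)$ and recovers the value $0.6250$:

```python
import numpy as np
from scipy.optimize import linprog

def solve_matrix_game(X):
    """Row player's LP of Eq. (14): maximize u s.t. X^T a >= u, sum(a) = 1."""
    m, n = X.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                                # linprog minimizes, so minimize -u
    A_ub = np.hstack([-X.T, np.ones((n, 1))])   # u - sum_i x_ij a_i <= 0, each column j
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])  # sum_i a_i = 1
    bounds = [(0, None)] * m + [(None, None)]   # a_i >= 0, u free
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0], bounds=bounds)
    return res.x[:m], res.x[-1]

X = np.array([[5, 6, -8, -5], [-5, 8, 10, -4], [7, 5, -10, 9]], dtype=float)
a_star, value = solve_matrix_game(X)  # a* = (0, 17/32, 15/32), value = 0.625
```

The exact optimum is $a^*=(0,17/32,15/32)$ with value $0.625$, matching the rounded figures above.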

# 3 Matrix games based on MBS

This section proposes a matrix game with MBS payoffs, which takes both discord and nonspecificity into account. Some definitions are given first:
**Definition 6.** Suppose player $A$ prefers strategy $A$ and player $B$ prefers strategy $B$; the game can be abbreviated as:

$$\text{MG}_\text{MBS}=(A,B,X),$$

where $X=(M_{ij})_{m\times n}$ is player $A$'s MBS payoff matrix. Hence every element $M_{ij}$ of $X$ is an MBS, constrained by the frame of discernment $\Omega=\{\theta_1,\theta_2,\dots,\theta_n\}$.
According to Definition 6, $M_{ij}$ denotes player $A$'s gain when player $A$ adopts pure strategy $S_{Ai}$ and player $B$ adopts pure strategy $S_{Bj}$. The gain is expressed by an MBS on $\Omega$. To capture the nonspecificity and discord contained in the game matrix, the grades $\theta_1,\theta_2,\dots,\theta_n$ can be linguistic variables, so that uncertainty is embedded in the game. Moreover, in the mixed-strategy case, e.g., when player $A$ prefers mixed strategy $A$ and player $B$ prefers mixed strategy $B$, player $A$'s expected gain can be written $M_{AB}$ and computed in the matrix game $\text{MG}_\text{MBS}$ by the following equation:

$$M_{AB}(A)=\sum_{i=1}^m\sum_{j=1}^na_ib_jM_{ij}(A),\qquad A\subseteq\Omega.\tag{17}$$

Once the MBS payoff matrix of a zero-sum game is given, the equilibrium can be computed through the following steps.
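Equation (17) simply mixes the MBS payoffs entry-wise under the two mixed strategies. A minimal sketch (the dict-of-`frozenset` representation of an MBS and the toy matrix are assumptions for illustration):

```python
def expected_mbs(a, b, M):
    """Expected MBS gain of Eq. (17): M_AB(A) = sum_ij a_i b_j M_ij(A)."""
    out = {}
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            for F, w in M[i][j].items():
                out[F] = out.get(F, 0.0) + ai * bj * w
    return out

# a 2x1 toy payoff matrix of MBSs; the column player has a single strategy
M = [[{frozenset({"G"}): 1.0}],
     [{frozenset({"B"}): 1.0}]]
M_AB = expected_mbs((0.5, 0.5), (1.0,), M)
```

With the row player mixing evenly, the expected structure assigns weight 0.5 to each of the two focal elements.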

## 3.1 Step 1: Determine the relative utilities over the frame of discernment

In $\text{MG}_\text{MBS}$, the payoff matrix is expressed by MBSs, which cannot be quantified directly. In a game, increases and decreases in utility are measured by the players' gains and losses in the matrix game. We therefore adjust the MBS by quantifying changes in utility. Given the relative correspondence between pairs of elements of $\Omega$, and to simplify the form of the utility function, a chain utility function is defined as:

$$u(\theta_i)=f_{i,i-1}(u(\theta_{i-1})),\qquad2\leq i\leq n,\tag{18}$$

where $u(\theta_i)$ is the utility of $\theta_i$. According to Equation (18), if $\Omega$ has size $n$, then $n-1$ relations must be specified. From these, the relative utility of any two elements can be obtained.

## 3.2 Step 2: Eliminate nonspecificity and discord

In an MBS, both nonspecificity and discord should be taken into account, yet they are difficult to handle simultaneously. We therefore eliminate the uncertainty of the payoff matrix based on the ideal and extreme cases. The utility of an MBS can be expressed as

$$u\left(M_{ij}\right)=\left[u_{ij}^{L},u_{ij}^{R}\right],\tag{19}$$

where

$$\left\{\begin{aligned}u_{ij}^{L}&=\sum_{\theta\in\Omega}L_{M}(\theta)u(\theta)=\sum_{\theta\in\Omega}\max_{i,\,F_{i}\subseteq\{\theta\}}\left\{M\left(F_{i}\right)\right\}u(\theta),\\ u_{ij}^{R}&=\sum_{\theta\in\Omega}U_{M}(\theta)u(\theta)=\sum_{\theta\in\Omega}\max_{i,\,\{\theta\}\cap F_{i}\neq\varnothing}\left\{M\left(F_{i}\right)\right\}u(\theta).\end{aligned}\right.\tag{20}$$

The interval-valued payoff matrix can then be calculated as:

$$U=u(M)=\left(\left[u_{ij}^{L},u_{ij}^{R}\right]\right)_{m\times n}.\tag{21}$$
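Equations (19) to (21) map each MBS payoff to an interval of utilities. The sketch below pairs the left endpoint with the lower possibility and the right endpoint with the upper possibility, which is the reading consistent with $u_{ij}^L\leq u_{ij}^R$; the dict representation of an MBS and the sample utility table are assumptions:

```python
def interval_utility(M, u):
    """Map an MBS payoff to [u_L, u_R] via Eq. (20); u maps grade -> utility."""
    # L_M on a singleton {t} picks the weight of the focal element {t}, if any
    u_L = sum(M.get(frozenset({t}), 0.0) * ut for t, ut in u.items())
    # U_M on a singleton {t} picks the largest weight of any focal element containing t
    u_R = sum(max((w for F, w in M.items() if t in F), default=0.0) * ut
              for t, ut in u.items())
    return u_L, u_R

# sample utilities in units of u(VB): each grade worth twice the previous one
u = {"VB": 1.0, "B": 2.0, "F": 4.0, "G": 8.0, "VG": 16.0}
M = {frozenset({"G"}): 1.0, frozenset({"G", "VG"}): 0.5}
lo, hi = interval_utility(M, u)  # [8, 16] in units of u(VB)
```

The nonspecific focal element {G, VG} widens the interval: it contributes nothing to the lower endpoint but pulls the upper endpoint toward $u(VG)$.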

## 3.3 Step 3: Compute the equilibrium point

Some existing methods can be used to handle the interval-valued matrix game transformed from $\text{MG}_\text{MBS}$:
$$\max \bar{u}$$

$$\text{s.t. }\begin{cases}\sum_{i=1}^{m}u_{ij}a_{i}\geq\bar{u}, & j=1,2,\dots,n,\\ u_{ij}^{L}\leq u_{ij}\leq u_{ij}^{R}, & \\ \sum_{i=1}^{m}a_{i}=1, & \\ a_{i}\geq0, & i=1,2,\dots,m,\end{cases}\tag{22}$$

and

$$\min \underline{v}$$

$$\text{s.t. }\begin{cases}\sum_{j=1}^{n}u_{ij}b_{j}\leq\underline{v}, & i=1,2,\dots,m,\\ u_{ij}^{L}\leq u_{ij}\leq u_{ij}^{R}, & \\ \sum_{j=1}^{n}b_{j}=1, & \\ b_{j}\geq0, & j=1,2,\dots,n.\end{cases}\tag{23}$$
The above formulations can be converted into the following two linear programs:
$$\max \bar{u}$$

$$\text{s.t. }\begin{cases}\sum_{i=1}^{m}u_{ij}^{R}a_{i}\geq\bar{u}, & j=1,2,\dots,n,\\ \sum_{i=1}^{m}a_{i}=1, & \\ a_{i}\geq0, & i=1,2,\dots,m,\end{cases}\tag{24}$$

and

$$\min \underline{v}$$

$$\text{s.t. }\begin{cases}\sum_{j=1}^{n}u_{ij}^{L}b_{j}\leq\underline{v}, & i=1,2,\dots,m,\\ \sum_{j=1}^{n}b_{j}=1, & \\ b_{j}\geq0, & j=1,2,\dots,n.\end{cases}\tag{25}$$

According to the duality theory of linear programming, these can be further converted to:
$$\min \bar{v}$$

$$\text{s.t. }\begin{cases}\sum_{j=1}^{n}u_{ij}^{R}b_{j}\leq\bar{v}, & i=1,2,\dots,m,\\ \sum_{j=1}^{n}b_{j}=1, & \\ b_{j}\geq0, & j=1,2,\dots,n,\end{cases}\tag{26}$$

and

$$\max \underline{u}$$

$$\text{s.t. }\begin{cases}\sum_{i=1}^{m}u_{ij}^{L}a_{i}\geq\underline{u}, & j=1,2,\dots,n,\\ \sum_{i=1}^{m}a_{i}=1, & \\ a_{i}\geq0, & i=1,2,\dots,m.\end{cases}\tag{27}$$

By solving these linear programming problems, the equilibria $(\overline{a}^*,\overline{b}^*)$ and $(\underline{a}^*,\underline{b}^*)$ are obtained; they correspond to the solutions of Equations (24) and (26), and of Equations (27) and (25), respectively.

## 3.4 The value of $\text{MG}_\text{MBS}$

This section formally defines the value of $\text{MG}_\text{MBS}$:

**Definition 7.** Let $\text{MG}_\text{MBS}=(A,B,X)$ denote a matrix game whose equilibrium points are $(\overline{a}^*,\overline{b}^*)$ and $(\underline{a}^*,\underline{b}^*)$. Then the value of $\text{MG}_\text{MBS}$ is $V=[V^L,V^R]$, which satisfies

$$\begin{aligned}V^{L}=u_{\underline{a}^{*}\underline{b}^{*}}&=\sum_{\theta\in\Omega}L_{M}(\theta)u(\theta)=\sum_{\theta\in\Omega}\max_{i,\,F_{i}\subseteq\{\theta\}}\left\{M\left(F_{i}\right)\right\}u(\theta),\\ V^{R}=u_{\overline{a}^{*}\overline{b}^{*}}&=\sum_{\theta\in\Omega}U_{M}(\theta)u(\theta)=\sum_{\theta\in\Omega}\max_{i,\,\{\theta\}\cap F_{i}\neq\varnothing}\left\{M\left(F_{i}\right)\right\}u(\theta).\end{aligned}\tag{28}$$

Finally, player $A$'s expected equilibrium gain (or player $B$'s expected equilibrium loss) is given by the value of $\text{MG}_\text{MBS}$, which carries the discord and nonspecificity of the MBS. Moreover, when $M_{\overline{a}^*\overline{b}^*}=M_{\underline{a}^*\underline{b}^*}=M_{a^*b^*}$, the value of $\text{MG}_\text{MBS}$ can be written as $V=M_{a^*b^*}$.

# 4 Case study

This section uses a territorial dispute to verify the effectiveness and efficiency of the proposed matrix game with MBS payoffs. Territorial sovereignty is indivisible, so the dispute can be modeled as a zero-sum game. As shown in Figure 1, two countries named ALPHA and BETA compete for the territory Zeta. Since the two countries are on the brink of war, the United Nations sends a team of experts to evaluate feasible policies for easing the tension between them.

Figure 1: The territorial dispute

## 4.1 Game Analysis

This step assesses the potential strategies that ALPHA and BETA may adopt by analyzing the two countries' historical decisions and actions. In this case, suppose ALPHA has three possible strategies and BETA has four.
ALPHA's strategies are as follows:
1) $S_{A1}$: negotiate with BETA and make some compromises;
2) $S_{A2}$: deploy more troops to Zeta;
3) $S_{A3}$: adopt some passive strategies.
BETA's strategies are as follows:
1) war;
2) local conflict;
3) welfare policy;
4) military confrontation.

## 4.2 Decision analysis

Due to the imprecision and uncertainty of the changing environment, the payoff under each strategy combination cannot be determined as a precise number. Therefore, MBSs are used to express the uncertain payoffs. To represent the assessments collected from the experts simply, the evaluation grades are first determined by the frame of discernment $\Omega$. Specifically, linguistic variables, whose values carry uncertain information, constitute the frame of discernment:

$$\Omega=\{\text{Very Bad }(VB),\text{Bad }(B),\text{Fair }(F),\text{Good }(G),\text{Very Good }(VG)\}.$$

Once the linguistic variables in the frame are identified, the relative utility between each pair of linguistic variables must be fixed so that they lie on a uniform scale. In this case, considering that $\Omega$ has size $5$, the $10$ pairwise utility relations are fixed by the following chain of relative utility functions:

$$U(VG)/U(G)=2,\quad U(G)/U(F)=2,\quad U(F)/U(B)=2,\quad U(B)/U(VB)=2.\tag{29}$$
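In code, the chain of Eq. (29) fixes every grade's utility in units of $u(VB)$ (a small sketch; normalizing $u(VB)=1$ is an assumption made for illustration):

```python
# chain utility function of Eq. (18) with the concrete ratios of Eq. (29)
grades = ["VB", "B", "F", "G", "VG"]
u = {"VB": 1.0}                  # measure everything in multiples of u(VB)
for prev, cur in zip(grades, grades[1:]):
    u[cur] = 2.0 * u[prev]       # Eq. (29): u(cur) / u(prev) = 2
```

This gives $u(B)=2$, $u(F)=4$, $u(G)=8$, $u(VG)=16$ in units of $u(VB)$, which underlies the $u(VB)$-only form used later in Table 3.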

## 4.3 Payoff assessment

The payoff matrix of the game can be built from the linguistic assessments and the players' strategies. Because the game is zero-sum, only the payoff matrix of ALPHA (or of BETA) needs to be given. Table 1 shows ALPHA's payoff matrix; note that each assessment is represented as an MBS so as to express nonspecificity and discord.

Table 1: ALPHA's payoff matrix

## 4.4 Solving the MBS matrix game

First, nonspecificity and discord are eliminated via Equation (20), and the MBS matrix game is transformed into an interval-valued payoff matrix, as shown in Table 2.

Table 2: ALPHA's interval-valued payoff matrix

Then, based on Equation (29), Table 2 is converted into a form involving only $u(VB)$, as shown in Table 3.

Table 3: ALPHA's payoff matrix expressed only in units of $u(VB)$

Moreover, the payoff matrix in Table 3 can be decomposed into two crisp payoff matrices $U^L$ and $U^R$:

$$U^{L}=\begin{bmatrix}20.8u(\mathrm{VB})&7.2u(\mathrm{VB})&8u(\mathrm{VB})&1u(\mathrm{VB})\\8u(\mathrm{VB})&4u(\mathrm{VB})&6.4u(\mathrm{VB})&1.6u(\mathrm{VB})\\19.2u(\mathrm{VB})&2u(\mathrm{VB})&4u(\mathrm{VB})&2u(\mathrm{VB})\end{bmatrix},$$

$$U^{R}=\begin{bmatrix}23.4u(\mathrm{VB})&17.5u(\mathrm{VB})&31u(\mathrm{VB})&16u(\mathrm{VB})\\31u(\mathrm{VB})&17.7u(\mathrm{VB})&31u(\mathrm{VB})&31u(\mathrm{VB})\\22u(\mathrm{VB})&18.6u(\mathrm{VB})&31u(\mathrm{VB})&19.4u(\mathrm{VB})\end{bmatrix}.$$

For $U^L$, following Section 3, we have
$$\max \underline{u}$$

$$\text{s.t. }\begin{cases}20.8a_{1}+8a_{2}+19.2a_{3}\geq\underline{u},\\7.2a_{1}+4a_{2}+2a_{3}\geq\underline{u},\\8a_{1}+6.4a_{2}+4a_{3}\geq\underline{u},\\1a_{1}+1.6a_{2}+2a_{3}\geq\underline{u},\\a_{1}+a_{2}+a_{3}=1,\\a_{1},a_{2},a_{3}\geq0,\end{cases}$$

and

$$\min \underline{v}$$

$$\text{s.t. }\begin{cases}20.8b_{1}+7.2b_{2}+8b_{3}+b_{4}\leq\underline{v},\\8b_{1}+4b_{2}+6.4b_{3}+1.6b_{4}\leq\underline{v},\\19.2b_{1}+2b_{2}+4b_{3}+2b_{4}\leq\underline{v},\\b_{1}+b_{2}+b_{3}+b_{4}=1,\\b_{1},b_{2},b_{3},b_{4}\geq0.\end{cases}$$

For $U^L$, the optimal solutions are $\underline{a}^*=(0,0,1)$ and $\underline{b}^*=(0,0.0618,0,0.9382)$.
Similarly, for $U^R$ we have
$$\max \bar{u}$$

$$\text{s.t. }\begin{cases}23.4a_{1}+31a_{2}+22a_{3}\geq\bar{u},\\17.5a_{1}+17.7a_{2}+18.6a_{3}\geq\bar{u},\\31a_{1}+31a_{2}+31a_{3}\geq\bar{u},\\16a_{1}+31a_{2}+19.4a_{3}\geq\bar{u},\\a_{1}+a_{2}+a_{3}=1,\\a_{1},a_{2},a_{3}\geq0,\end{cases}$$
and

$$\min \bar{v}$$

$$\text{s.t. }\begin{cases}23.4b_{1}+17.5b_{2}+31b_{3}+16b_{4}\leq\bar{v},\\31b_{1}+17.7b_{2}+31b_{3}+31b_{4}\leq\bar{v},\\22b_{1}+18.6b_{2}+31b_{3}+19.4b_{4}\leq\bar{v},\\b_{1}+b_{2}+b_{3}+b_{4}=1,\\b_{1},b_{2},b_{3},b_{4}\geq0.\end{cases}$$

The optimal solutions are $\overline{a}^*=(0,0,1)$ and $\overline{b}^*=(0,1,0,0)$. Therefore, ALPHA's expected payoff can be computed via Equation (17).
At the equilibrium point $(\underline{a}^*,\underline{b}^*)$:

$$M(B)=1.$$

At the equilibrium point $(\overline{a}^*,\overline{b}^*)$:

$$M(VG)=0.6,\quad M(G)=0.6,\quad M(F)=0.4,\quad M(B)=1,\quad M(VB)=0.6.$$

Finally, the value of the MBS matrix game is $[\min u(M_{\underline{a}^*\underline{b}^*}),\max u(M_{\overline{a}^*\overline{b}^*})]=[u(B),\,0.6u(VG)+0.6u(G)+0.4u(F)+u(B)+0.6u(VB)]$.
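As a numerical check (a sketch using `scipy.optimize.linprog`, not the authors' code), solving the two crisp games reproduces the equilibrium values $2u(VB)$ for $U^L$ and $18.6u(VB)$ for $U^R$, consistent with $u(B)=2u(VB)$ and $0.6u(VG)+0.6u(G)+0.4u(F)+u(B)+0.6u(VB)=18.6u(VB)$:

```python
import numpy as np
from scipy.optimize import linprog

def game_value(X):
    """Value of a crisp zero-sum matrix game via the row player's LP (Eq. 14)."""
    m, n = X.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                                 # linprog minimizes, so minimize -u
    A_ub = np.hstack([-X.T, np.ones((n, 1))])    # u - sum_i x_ij a_i <= 0, each column
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])  # sum_i a_i = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * m + [(None, None)])
    return res.x[-1]

# case-study payoff matrices, in units of u(VB)
U_L = np.array([[20.8, 7.2, 8.0, 1.0],
                [8.0, 4.0, 6.4, 1.6],
                [19.2, 2.0, 4.0, 2.0]])
U_R = np.array([[23.4, 17.5, 31.0, 16.0],
                [31.0, 17.7, 31.0, 31.0],
                [22.0, 18.6, 31.0, 19.4]])
v_L, v_R = game_value(U_L), game_value(U_R)  # 2.0 and 18.6
```

Note that $U^R$ has a pure saddle point at row 3 and column 2 (its entry, $18.6u(VB)$, is both its row minimum and its column maximum), which is why $\overline{a}^*$ and $\overline{b}^*$ are pure strategies.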

Thanks for reading.