Paper reading (50): A novel matrix game with payoffs of maxitive belief structure

Inge, 2022-06-23 18:03:48

0 Introduction

0.1 Subject

IJIS 2019: A novel matrix game with payoffs of maxitive belief structure

0.2 Background

Maxitive belief structures (MBSs) express imprecise and uncertain information in a dedicated structure, so that the nonspecificity and the imprecision of probability can be modeled at the same time.

0.3 Method

The proposed model describes the interaction between the two players of a game, and an effective method is given for solving the game by reaching an equilibrium point.

0.4 Bib

@article{Han:2019:690706,
  author  = {Yu Zhen Han and Yong Deng},
  title   = {A novel matrix game with payoffs of maxitive belief structure},
  journal = {International Journal of Intelligent Systems},
  volume  = {34},
  number  = {4},
  pages   = {690--706},
  year    = {2019},
  doi     = {10.1002/int.22072}
}

1 Basic knowledge

1.1 Dempster-Shafer (D-S) evidence theory

To deal with uncertainty, many mathematical tools have been proposed, for example D numbers, Z numbers, and fuzzy sets. D-S theory, also known as evidence theory or belief theory, was proposed by Dempster and extended by Shafer. D-S theory can be regarded as a generalization of Bayesian probability theory, because it has to satisfy weaker constraints than Bayesian theory. Owing to its ability to handle imprecise and uncertain information, D-S theory has been applied to information fusion, multi-criteria decision making (MCDM), pattern recognition, and risk analysis. Some basic concepts of the theory are given below.
Definition 1 (Frame of discernment): Let $\Omega=\{\theta_1,\theta_2,\dots,\theta_n\}$ be a finite set of $n$ elements, and let $P(\Omega)$ be its power set, which contains $2^n$ elements:
$$P(\Omega)=\{\emptyset,\{\theta_1\},\{\theta_2\},\dots,\{\theta_n\},\{\theta_1,\theta_2\},\{\theta_1,\theta_3\},\dots,\Omega\}.\tag{1}$$
Definition 2 (Basic probability assignment, BPA): A BPA is a mapping from the power set $P(\Omega)$ to values between $0$ and $1$:
$$m:P(\Omega)\to[0,1],\tag{2}$$
which satisfies the conditions
$$m(\emptyset)=0,\qquad\sum_{A\in P(\Omega)}m(A)=1.\tag{3}$$
Here $m(A)$ indicates the degree of belief committed to $A$ in practical applications.
Definition 3 (Belief and plausibility functions): The belief function is defined as
$$\text{Bel}:P(\Omega)\to[0,1]\quad\text{and}\quad\text{Bel}(A)=\sum_{B\subseteq A}m(B),\tag{4}$$
and the plausibility function $\text{Pl}$ is defined as
$$\text{Pl}:P(\Omega)\to[0,1]\quad\text{and}\quad\text{Pl}(A)=\sum_{B\cap A\neq\emptyset}m(B)=1-\text{Bel}(\overline{A}).\tag{5}$$
$\text{Bel}(A)$ and $\text{Pl}(A)$ represent the lower bound and the upper bound of the support for $A$, respectively.
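To make Eqs. (4) and (5) concrete, here is a minimal sketch (my own illustration, not code from the paper): `bel` and `pl` are hypothetical helper names, and the BPA over the toy frame {a, b, c} is invented.

```python
# Minimal sketch of Eq. (4)-(5): Bel and Pl computed from a BPA over a small frame.
# The BPA below is a made-up example, not data from the paper.
bpa = {frozenset({"a"}): 0.5, frozenset({"a", "b"}): 0.3, frozenset({"b", "c"}): 0.2}

def bel(A, m):
    """Belief of A: total mass of focal elements contained in A (Eq. 4)."""
    return sum(w for B, w in m.items() if B <= A)

def pl(A, m):
    """Plausibility of A: total mass of focal elements intersecting A (Eq. 5)."""
    return sum(w for B, w in m.items() if B & A)

A = frozenset({"a", "b"})
print(bel(A, bpa), pl(A, bpa))  # 0.8 and 1.0 for this toy BPA
```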
Definition 4 (Dempster's combination rule): If $m(A)>0$, then $A$ is called a focal element, and the set of all focal elements is called a body of evidence (BOE). When multiple independent BOEs are available, the combined evidence can be obtained with Dempster's combination rule:
$$m(A)=\frac{\sum\limits_{B,C\subseteq\Omega,\,B\cap C=A}m_1(B)m_2(C)}{1-K},\tag{6}$$
where $K=\sum_{B\cap C=\emptyset}m_1(B)m_2(C)$ is a normalization constant that measures the conflict between the two BOEs. The combination rule is meaningful only when $K\neq1$; otherwise the two pieces of evidence are in total conflict, the rule fails, and how to handle such conflicts remains an open question.
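A small sketch of Dempster's rule in Eq. (6): `dempster_combine` is an assumed helper name and the two BPAs are made up, but the conflict handling follows the rule above.

```python
# Minimal sketch of Dempster's rule (Eq. 6) for two BPAs on the same frame.
# The input BPAs are toy examples; the rule is not applicable when K = 1.
from collections import defaultdict

def dempster_combine(m1, m2):
    combined = defaultdict(float)
    K = 0.0
    for B, w1 in m1.items():
        for C, w2 in m2.items():
            inter = B & C
            if inter:
                combined[inter] += w1 * w2
            else:
                K += w1 * w2          # mass falling on the empty set = conflict
    if abs(1.0 - K) < 1e-12:
        raise ValueError("total conflict: Dempster's rule is not applicable")
    return {A: w / (1.0 - K) for A, w in combined.items()}, K

m1 = {frozenset({"a"}): 0.6, frozenset({"a", "b"}): 0.4}
m2 = {frozenset({"b"}): 0.3, frozenset({"a", "b"}): 0.7}
m12, K = dempster_combine(m1, m2)
print(K)    # 0.18 for these toy BPAs
print(m12)  # combined masses, renormalized by 1 - K
```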

1.2 Definition of the maxitive belief structure (MBS)

A maxitive belief structure $M$ on a finite space $X=\{x_1,x_2,\dots,x_n\}$ associated with a variable $V$ consists of $q$ nonempty subsets of $X$, called focal elements and denoted $F_i,\ i=1,2,\dots,q$, whose weights satisfy $M(F_i)\geq0$ and $\max_iM(F_i)=1$. Compared with classical D-S belief theory, the weights of the focal elements do not have to sum to $1$, which makes the structure better suited to practical applications.
Suppose $A$ is a subset of $X$. Two important set functions associated with $M$, called the upper possibility and the lower possibility, are defined as follows:
$$U_{M}(A)=\max_{i,\,A\cap F_{i}\neq\varnothing}\{M(F_{i})\},\qquad L_{M}(A)=\max_{i,\,F_{i}\subseteq A}\{M(F_{i})\}.\tag{7}$$
These can further be expressed as
$$U_{M}(A)=\max_{i}\left[M(F_{i})\wedge G_{i}(A)\right],\qquad L_{M}(A)=\max_{i}\left[M(F_{i})\wedge H_{i}(A)\right],\tag{8}$$
where $G_i(A)=1$ if $A\cap F_i\neq\emptyset$ and $0$ otherwise, and $H_i(A)=1$ if $F_i\subseteq A$ and $0$ otherwise. In addition, $G_i(A)=\max_{j}[F_i(x_j)\wedge A(x_j)]$, where $F_i(\cdot)$ and $A(\cdot)$ are indicator functions, and $H_i(A)$ can also be rewritten as
$$H_i(A)=1-\max_{j}[F_i(x_j)\wedge(1-A(x_j))].\tag{9}$$
Definition 5 (Fusing rule of MBSs): Suppose $M_1$ and $M_2$ are two MBSs providing two pieces of information about a variable $V$, with focal elements $F_1,F_2,\dots,F_n$ and $E_1,E_2,\dots,E_q$, respectively. Their combination rule is defined as
$$M(F_i\cap E_j)=\frac{\min[M_1(F_i),M_2(E_j)]}{k}\qquad\text{for }F_i\cap E_j\neq\emptyset,\tag{10}$$
where $k=\max_{F_i\cap E_j\neq\emptyset}\min[M_1(F_i),M_2(E_j)]$. This operation can be written as $M=M_1\perp M_2$ and is called maxitive fusion. Note that $\perp$ is symmetric, i.e., $M_1\perp M_2=M_2\perp M_1$. Here $k$ is the conflict coefficient; when $k=0$ the two MBSs are in total conflict and cannot be combined. If there exist $F^*$ and $E^*$ such that $F^*\cap E^*\neq\emptyset$ and $M_1(F^*)=M_2(E^*)=1$, then
$$M(F_i\cap E_j)=\min[M_1(F_i),M_2(E_j)]\qquad\text{for }F_i\cap E_j\neq\emptyset.\tag{11}$$
It is easy to show that $k$ is a discounting coefficient and that $\max M(F)=1$. Furthermore, the fusion rule can be extended to the case of more than two MBSs. Suppose $M_i,\ i=1,2,\dots,n$ are multiple MBSs with focal elements $F_{ij}$. When the condition $F_{1j_1}\cap F_{2j_2}\cap\cdots\cap F_{nj_n}\neq\varnothing$ is satisfied, we have
$$M\left(F_{1j_1}\cap F_{2j_2}\cap\cdots\cap F_{nj_n}\right)=\frac{\min\left[M_{1}(F_{1j_{1}}),M_{2}(F_{2j_{2}}),\dots,M_{n}(F_{nj_{n}})\right]}{k},\tag{12}$$
where $k=\max_{F_{1j_1}\cap F_{2j_2}\cap\cdots\cap F_{nj_n}\neq\varnothing}\min\left[M_{1}(F_{1j_{1}}),M_{2}(F_{2j_{2}}),\dots,M_{n}(F_{nj_{n}})\right]$.
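The fusion rule of Definition 5 can be sketched the same way: `mbs_fuse`, the toy focal sets, and the choice to keep the largest min-weight when two pairs produce the same intersection are my own assumptions.

```python
# Minimal sketch of the MBS fusion rule of Eq. (10): M = M1 ⊥ M2.
# Focal weights need not sum to 1; only max_i M(F_i) = 1 is required.
def mbs_fuse(M1, M2):
    raw = {}
    for F, w1 in M1.items():
        for E, w2 in M2.items():
            inter = F & E
            if inter:
                # assumption: keep the largest min-weight if the same intersection recurs
                raw[inter] = max(raw.get(inter, 0.0), min(w1, w2))
    if not raw:
        raise ValueError("k = 0: the two MBSs are in total conflict")
    k = max(raw.values())              # conflict coefficient of Eq. (10)
    return {F: w / k for F, w in raw.items()}

M1 = {frozenset({"VG"}): 1.0, frozenset({"G", "F"}): 0.5}
M2 = {frozenset({"G"}): 1.0, frozenset({"G", "VG"}): 0.4}
print(mbs_fuse(M1, M2))   # fused MBS; its maximum weight is 1 after dividing by k
```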

1.3 Traditional matrix game with crisp number payoffs

A game usually consists of a set of players and a finite strategy set for each player. Let $S_A=\{s_{A1},s_{A2},\dots,s_{Am}\}$ and $S_B=\{s_{B1},s_{B2},\dots,s_{Bn}\}$ denote the pure strategy sets of players $A$ and $B$, and let $A=\{a_1,a_2,\dots,a_m\}$ and $B=\{b_1,b_2,\dots,b_n\}$ denote mixed strategies of players $A$ and $B$. Let $X=(x_{ij})_{m\times n}$ denote player $A$'s payoff matrix. Then the triple $(\{A\},\{B\},X)$ represents a two-player zero-sum matrix game with expected payoff
$$AXB^T=\sum_{i=1}^m\sum_{j=1}^na_ix_{ij}b_j.\tag{13}$$
In this case, player $B$'s expected loss is $-AXB^T$.
John von Neumann proved the minimax theorem, which makes it possible to handle matrix game problems and find the optimal strategies of both players. In practice, a player should seek a mixed strategy that maximizes their minimum expected gain, or equivalently one that minimizes their maximum expected loss. This can be described as a pair of linear programming problems:
$\max u$
$$\text{s.t. }\left\{\begin{array}{l}\sum_{i=1}^{m}x_{ij}a_{i}\geq u,\quad j=1,2,\dots,n,\\ \sum_{i=1}^{m}a_{i}=1,\\ a_{i}\geq0,\quad i=1,2,\dots,m,\end{array}\right.\tag{14}$$
and $\min v$
$$\text{s.t. }\left\{\begin{array}{l}\sum_{j=1}^{n}x_{ij}b_{j}\leq v,\quad i=1,2,\dots,m,\\ \sum_{j=1}^{n}b_{j}=1,\\ b_{j}\geq0,\quad j=1,2,\dots,n.\end{array}\right.\tag{15}$$
Formulas (14) and (15) form a primal-dual pair of linear programs, so the maximum of $u$ equals the minimum of $v$. The value $Z=u=v$ is the value (solution) of the matrix game.
The following example shows more concretely how the minimax theorem is used to solve a matrix game. Let $(S_A,S_B,X)$ denote the current game, where player $A$ has three strategies, $S_A=\{s_{A1},s_{A2},s_{A3}\}$, and player $B$ has four strategies, $S_B=\{s_{B1},s_{B2},s_{B3},s_{B4}\}$. Player $A$'s payoff matrix is
$$X=\begin{pmatrix}5&6&-8&-5\\-5&8&10&-4\\7&5&-10&9\end{pmatrix}.$$
Since the game is zero-sum, player $B$'s payoff matrix is $-X$. Let $A=\{a_1,a_2,a_3\}$ and $B=\{b_1,b_2,b_3,b_4\}$ denote the mixed strategies of players $A$ and $B$, respectively. The equilibrium (outcome) of the game can be obtained from formulas (14) and (15):
$\max u$
$$\text{s.t. }\left\{\begin{array}{l}5a_1-5a_2+7a_3\geq u,\\ 6a_1+8a_2+5a_3\geq u,\\ -8a_1+10a_2-10a_3\geq u,\\ -5a_1-4a_2+9a_3\geq u,\\ a_1+a_2+a_3=1,\\ a_1,a_2,a_3\geq0,\end{array}\right.$$
and $\min v$
$$\text{s.t. }\left\{\begin{array}{l}5b_1+6b_2-8b_3-5b_4\leq v,\\ -5b_1+8b_2+10b_3-4b_4\leq v,\\ 7b_1+5b_2-10b_3+9b_4\leq v,\\ b_1+b_2+b_3+b_4=1,\\ b_1,b_2,b_3,b_4\geq0.\end{array}\right.\tag{16}$$
Solving these programs, the equilibrium is reached at $A^*=(0,0.5312,0.4687)$ and $B^*=(0.625,0,0.3750,0)$, with $u=v=0.6250$.
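The numbers above can be checked with an off-the-shelf LP solver. Below is a minimal sketch with scipy.optimize.linprog (my own formulation, not the paper's code); the variable layout $(a_1,\dots,a_m,u)$ is an implementation choice.

```python
# Minimal sketch: solve player A's linear program (14) for the example game.
import numpy as np
from scipy.optimize import linprog

X = np.array([[5, 6, -8, -5],
              [-5, 8, 10, -4],
              [7, 5, -10, 9]], dtype=float)
m, n = X.shape

# Decision vector z = (a_1, ..., a_m, u); linprog minimizes, so minimize -u.
c = np.r_[np.zeros(m), -1.0]
A_ub = np.c_[-X.T, np.ones(n)]                  # u - (X^T a)_j <= 0 for every column j
b_ub = np.zeros(n)
A_eq = np.r_[np.ones(m), 0.0].reshape(1, -1)    # a_1 + ... + a_m = 1
b_eq = [1.0]
bounds = [(0, None)] * m + [(None, None)]       # a_i >= 0, u free

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
a_star, u_star = res.x[:m], res.x[m]
print(a_star.round(4), round(u_star, 4))        # expect roughly (0, 0.5312, 0.4688) and 0.625
```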

2 Matrix game based on MBS

This section presents a matrix game with MBS payoffs, which accounts for both discord and nonspecificity. Some definitions are given first.
Definition 6: Suppose player $A$ prefers strategy $A$ and player $B$ prefers strategy $B$. The matrix game with MBS payoffs can then be abbreviated as
$$\operatorname{MG}_\text{MBS}=(A,B,X),$$
where $X=(M_{ij})_{m\times n}$ is player $A$'s MBS payoff matrix. Each element $M_{ij}$ of $X$ is therefore an MBS, constrained by the frame of discernment $\Omega=\{\theta_1,\theta_2,\dots,\theta_n\}$.
According to Definition 6, $M_{ij}$ denotes player $A$'s gain when player $A$ adopts pure strategy $S_{Ai}$ and player $B$ adopts pure strategy $S_{Bj}$; the gain is expressed by an MBS over $\Omega$. To capture the nonspecificity and discord contained in the game matrix, the grades $\theta_1,\theta_2,\dots,\theta_n$ can be linear linguistic variables, so that uncertainty is included in the game. Moreover, in the mixed-strategy case, e.g., when player $A$ prefers mixed strategy $A$ and player $B$ prefers mixed strategy $B$, player $A$'s expected gain, denoted $M_{AB}$, can be calculated from the matrix game $\text{MG}_\text{MBS}$ by
$$M_{AB}(A)=\sum_{i=1}^m\sum_{j=1}^na_ib_jM_{ij}(A),\qquad A\subseteq\Omega.\tag{17}$$
Once the MBS payoff matrix of a zero-sum game is given, the equilibrium can be calculated through the following steps.
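A possible concrete reading of Eq. (17), assuming each $M_{ij}$ is stored as a mapping from focal sets to weights; `expected_mbs` and the toy payoff entries are invented for illustration.

```python
# Minimal sketch of Eq. (17): expected MBS payoff under mixed strategies a and b.
from collections import defaultdict

def expected_mbs(payoff, a, b):
    """payoff[i][j] is an MBS given as {frozenset(...): weight}; a, b are mixed strategies."""
    M_AB = defaultdict(float)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            for focal, w in payoff[i][j].items():
                M_AB[focal] += ai * bj * w
    return dict(M_AB)

# A toy 1x2 payoff matrix over the frame {G, B} (invented numbers).
payoff = [[{frozenset({"G"}): 1.0, frozenset({"G", "B"}): 0.4},
           {frozenset({"B"}): 1.0}]]
print(expected_mbs(payoff, a=[1.0], b=[0.5, 0.5]))
```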

2.1 Step 1: Determine the relative utilities over the frame of discernment

In $\text{MG}_\text{MBS}$ the payoff matrix is expressed by MBSs, but an MBS cannot be quantified directly. In a game, the increase or decrease of utility is measured by the players' gains and losses, so we quantify changes of utility in order to scale the MBSs. Once the relative correspondence between any two elements of $\Omega$ has been defined, and in order to simplify the form of the utility function, a chain utility function is defined as
$$u(\theta_i)=f_{i,i-1}(u(\theta_{i-1})),\qquad2\leq i\leq n,\tag{18}$$
where $u(\theta_i)$ is the utility of $\theta_i$. According to formula (18), if the size of $\Omega$ is $n$, then $n-1$ relations have to be considered. Based on these, the relative utility of any two elements can be obtained.

2.2 Step 2: Eliminate nonspecificity and discord

In an MBS, both nonspecificity and discord should be taken into account, but it is difficult to handle them at the same time. We therefore eliminate the uncertainty of the payoff matrix based on the ideal and extreme cases. The utility of an MBS can be expressed as
$$u(M_{ij})=\left[u_{ij}^{L},u_{ij}^{R}\right],\tag{19}$$
where
$$\left\{\begin{array}{l}u_{ij}^{L}=\sum_{\theta\in\Omega}L_{M}(\theta)u(\theta)=\sum_{\theta\in\Omega}\max_{i,\,F_{i}\subseteq\theta}\{M(F_{i})\}u(\theta),\\ u_{ij}^{R}=\sum_{\theta\in\Omega}U_{M}(\theta)u(\theta)=\sum_{\theta\in\Omega}\max_{i,\,\theta\cap F_{i}\neq\varnothing}\{M(F_{i})\}u(\theta).\end{array}\right.\tag{20}$$
The interval-valued payoff matrix can then be calculated as
$$U=u(M)=\left(\left[u_{ij}^{L},u_{ij}^{R}\right]\right)_{m\times n}.\tag{21}$$
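A minimal sketch of Eqs. (19) and (20), assuming an MBS is stored as a mapping from focal sets to weights; `mbs_to_interval`, the utility values, and the example MBS are my own illustrations.

```python
# Minimal sketch of Eqs. (19)-(20): one MBS payoff -> an interval [u^L, u^R].
from typing import Dict, FrozenSet, Tuple

def mbs_to_interval(mbs: Dict[FrozenSet[str], float],
                    utility: Dict[str, float]) -> Tuple[float, float]:
    u_low = u_up = 0.0
    for theta, u_theta in utility.items():
        # lower possibility of {theta}: largest weight of a focal element inside {theta}
        L = max((w for F, w in mbs.items() if F <= {theta}), default=0.0)
        # upper possibility of {theta}: largest weight of a focal element meeting {theta}
        U = max((w for F, w in mbs.items() if F & {theta}), default=0.0)
        u_low += L * u_theta
        u_up += U * u_theta
    return u_low, u_up

# Example utility scale (u(VB) = 1, doubling at each grade) and an invented MBS payoff.
u = {"VB": 1, "B": 2, "F": 4, "G": 8, "VG": 16}
M_ij = {frozenset({"B"}): 1.0, frozenset({"F", "G"}): 0.6, frozenset({"VG"}): 0.2}
print(mbs_to_interval(M_ij, u))   # (5.2, 12.4) for this example
```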

2.3 Step 3: Calculate the equilibrium point

Existing methods can be used to handle the interval-valued matrix game transformed from $\text{MG}_\text{MBS}$:
$\max\bar{u}$
$$\text{s.t. }\left\{\begin{array}{l}\sum_{i=1}^{m}u_{ij}a_{i}\geq\bar{u},\quad j=1,2,\dots,n,\\ u_{ij}^{L}\leq u_{ij}\leq u_{ij}^{R},\\ \sum_{i=1}^{m}a_{i}=1,\\ a_{i}\geq0,\quad i=1,2,\dots,m,\end{array}\right.\tag{22}$$
and $\min\underline{v}$
$$\text{s.t. }\left\{\begin{array}{l}\sum_{j=1}^{n}u_{ij}b_{j}\leq\underline{v},\quad i=1,2,\dots,m,\\ u_{ij}^{L}\leq u_{ij}\leq u_{ij}^{R},\\ \sum_{j=1}^{n}b_{j}=1,\\ b_{j}\geq0,\quad j=1,2,\dots,n.\end{array}\right.\tag{23}$$
The above formulas can be converted into the following two linear programs:
$\max\bar{u}$
$$\text{s.t. }\left\{\begin{array}{l}\sum_{i=1}^{m}u_{ij}^{R}a_{i}\geq\bar{u},\quad j=1,2,\dots,n,\\ \sum_{i=1}^{m}a_{i}=1,\\ a_{i}\geq0,\quad i=1,2,\dots,m,\end{array}\right.\tag{24}$$
and $\min\underline{v}$
$$\text{s.t. }\left\{\begin{array}{l}\sum_{j=1}^{n}u_{ij}^{L}b_{j}\leq\underline{v},\quad i=1,2,\dots,m,\\ \sum_{j=1}^{n}b_{j}=1,\\ b_{j}\geq0,\quad j=1,2,\dots,n.\end{array}\right.\tag{25}$$
According to the duality theory of linear programming, these can be further converted into
$\min\bar{v}$
$$\text{s.t. }\left\{\begin{array}{l}\sum_{j=1}^{n}u_{ij}^{R}b_{j}\leq\bar{v},\quad i=1,2,\dots,m,\\ \sum_{j=1}^{n}b_{j}=1,\\ b_{j}\geq0,\quad j=1,2,\dots,n,\end{array}\right.\tag{26}$$
and $\max\underline{u}$
$$\text{s.t. }\left\{\begin{array}{l}\sum_{i=1}^{m}u_{ij}^{L}a_{i}\geq\underline{u},\quad j=1,2,\dots,n,\\ \sum_{i=1}^{m}a_{i}=1,\\ a_{i}\geq0,\quad i=1,2,\dots,m.\end{array}\right.\tag{27}$$
By solving these linear programming problems, the equilibria $(\overline{a}^*,\overline{b}^*)$ and $(\underline{a}^*,\underline{b}^*)$ can be obtained; they correspond to the solutions of formulas (24) and (26), and of (27) and (25), respectively.

2.4 The value of $\text{MG}_\text{MBS}$

This section formally defines the value of $\text{MG}_\text{MBS}$:
Definition 7: Let $\text{MG}_\text{MBS}=(A,B,X)$ be a matrix game and let $(\overline{a}^*,\overline{b}^*)$ and $(\underline{a}^*,\underline{b}^*)$ be its equilibrium points. Then the value of $\text{MG}_\text{MBS}$ is $V=[V^L,V^R]$, which satisfies
$$\begin{aligned}V^{L}=u^{L}\big(M_{\underline{a}^{*}\underline{b}^{*}}\big)&=\sum_{\theta\in\Omega}L_{M}(\theta)u(\theta)=\sum_{\theta\in\Omega}\max_{i,\,F_{i}\subseteq\theta}\{M(F_{i})\}u(\theta),\\ V^{R}=u^{R}\big(M_{\overline{a}^{*}\overline{b}^{*}}\big)&=\sum_{\theta\in\Omega}U_{M}(\theta)u(\theta)=\sum_{\theta\in\Omega}\max_{i,\,\theta\cap F_{i}\neq\varnothing}\{M(F_{i})\}u(\theta).\end{aligned}\tag{28}$$
Finally, player $A$'s expected equilibrium gain (equivalently, player $B$'s expected equilibrium loss) in $\text{MG}_\text{MBS}$ is given by this value, which carries the discord and nonspecificity of the MBSs. Moreover, when $M_{\overline{a}^*\overline{b}^*}=M_{\underline{a}^*\underline{b}^*}=M_{a^*b^*}$, the value of $\text{MG}_\text{MBS}$ can simply be written as $V=M_{a^*b^*}$.

3 Case study

This section uses a territorial dispute to verify the effectiveness and efficiency of the proposed matrix game with MBS payoffs. Territorial sovereignty is indivisible, so the dispute can be regarded as a zero-sum game. As shown in Figure 1, two countries named ALPHA and BETA are competing for the same territory, Zeta. Because the two countries are on the brink of war, the United Nations sends a team of experts to evaluate some feasible policies for easing the tension between the two countries.

Figure 1: The territorial dispute

3.1 Game analysis

This step assesses the potential strategies that ALPHA and BETA may adopt by analyzing the historical decisions and actions of the two countries. In this case, suppose ALPHA has three possible strategies and BETA has four.
ALPHA's strategies are:
1) $S_{A1}$: negotiate with BETA and make some compromises;
2) $S_{A2}$: allocate more troops to Zeta;
3) $S_{A3}$: adopt some passive strategies.
BETA's strategies are:
1) war;
2) local conflict;
3) welfare policy;
4) military confrontation.

3.2 Decision analysis

Because of the imprecision and uncertainty of the changeable environment, the payoff under each strategy combination cannot be determined as a crisp number, so MBSs are used to express the uncertain payoffs. To represent the assessments collected from the experts concisely, the evaluation grades are first fixed by a frame of discernment $\Omega$ composed of linguistic variables whose values carry the uncertain information:
$$\Omega=\{\text{Very Bad }(VB),\text{Bad }(B),\text{Fair }(F),\text{Good }(G),\text{Very Good }(VG)\}.$$
Once the linguistic variables in the frame have been identified, the relative utility between any pair of linguistic variables is determined, so that they are placed on a uniform scale. In this case, considering that the size of $\Omega$ is $5$, the relative utilities of the $10$ pairs of linguistic variables are fixed by the following chain of utility relations:
$$u(VG)/u(G)=2,\quad u(G)/u(F)=2,\quad u(F)/u(B)=2,\quad u(B)/u(VB)=2.\tag{29}$$
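Taking $u(VB)$ as the unit, the chain relations in Eq. (29) determine every other utility; a tiny sketch of the arithmetic (my own, not from the paper):

```python
# Resolving the chain utility relations of Eq. (29) with u(VB) = 1 as the unit.
u = {"VB": 1.0}
u["B"] = 2 * u["VB"]
u["F"] = 2 * u["B"]
u["G"] = 2 * u["F"]
u["VG"] = 2 * u["G"]
print(u)   # {'VB': 1.0, 'B': 2.0, 'F': 4.0, 'G': 8.0, 'VG': 16.0}
```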

3.3 Payoff assessment

The payoff matrix of the game can be built from the linguistic assessments and the players' strategies. Because the game is zero-sum, only ALPHA's (or, equivalently, only BETA's) payoff matrix needs to be given. Table 1 shows ALPHA's payoff matrix; note that each assessment is represented as an MBS, so as to express nonspecificity and discord.

Table 1: ALPHA's payoff matrix

3.4 Solving the MBS matrix game

First, nonspecificity and discord are eliminated through formula (20), and the MBS matrix game is transformed into an interval-valued payoff matrix, as shown in Table 2.

Table 2: ALPHA's interval-valued payoff matrix

Then, based on formula (29), Table 2 is converted into a form that contains only $u(VB)$, as shown in Table 3.

Table 3: ALPHA's payoff matrix expressed in units of $u(VB)$

Moreover, the payoff matrix in Table 3 can be split into two crisp payoff matrices $U^L$ and $U^R$:
$$U^{L}=\begin{bmatrix}20.8u(\mathrm{VB})&7.2u(\mathrm{VB})&8u(\mathrm{VB})&1u(\mathrm{VB})\\ 8u(\mathrm{VB})&4u(\mathrm{VB})&6.4u(\mathrm{VB})&1.6u(\mathrm{VB})\\ 19.2u(\mathrm{VB})&2u(\mathrm{VB})&4u(\mathrm{VB})&2u(\mathrm{VB})\end{bmatrix},$$
$$U^{R}=\begin{bmatrix}23.4u(\mathrm{VB})&17.5u(\mathrm{VB})&31u(\mathrm{VB})&16u(\mathrm{VB})\\ 31u(\mathrm{VB})&17.7u(\mathrm{VB})&31u(\mathrm{VB})&31u(\mathrm{VB})\\ 22u(\mathrm{VB})&18.6u(\mathrm{VB})&31u(\mathrm{VB})&19.4u(\mathrm{VB})\end{bmatrix}.$$
For $U^L$, according to Section 2 we have
$\max\underline{u}$
$$\text{s.t. }\left\{\begin{array}{l}20.8a_{1}+8a_{2}+19.2a_{3}\geq\underline{u},\\ 7.2a_{1}+4a_{2}+2a_{3}\geq\underline{u},\\ 8a_{1}+6.4a_{2}+4a_{3}\geq\underline{u},\\ 1a_{1}+1.6a_{2}+2a_{3}\geq\underline{u},\\ a_{1}+a_{2}+a_{3}=1,\\ a_{1},a_{2},a_{3}\geq0,\end{array}\right.$$
and $\min\underline{v}$
$$\text{s.t. }\left\{\begin{array}{l}20.8b_{1}+7.2b_{2}+8b_{3}+b_{4}\leq\underline{v},\\ 8b_{1}+4b_{2}+6.4b_{3}+1.6b_{4}\leq\underline{v},\\ 19.2b_{1}+2b_{2}+4b_{3}+2b_{4}\leq\underline{v},\\ b_{1}+b_{2}+b_{3}+b_{4}=1,\\ b_{1},b_{2},b_{3},b_{4}\geq0.\end{array}\right.$$
For $U^L$, the optimal solutions are $\underline{a}^*=(0,0,1)$ and $\underline{b}^*=(0,0.0618,0,0.9382)$.
Similarly, for $U^R$ we have
$\max\bar{u}$
$$\text{s.t. }\left\{\begin{array}{l}23.4a_{1}+31a_{2}+22a_{3}\geq\bar{u},\\ 17.5a_{1}+17.7a_{2}+18.6a_{3}\geq\bar{u},\\ 31a_{1}+31a_{2}+31a_{3}\geq\bar{u},\\ 16a_{1}+31a_{2}+19.4a_{3}\geq\bar{u},\\ a_{1}+a_{2}+a_{3}=1,\\ a_{1},a_{2},a_{3}\geq0,\end{array}\right.$$
and $\min\bar{v}$
$$\text{s.t. }\left\{\begin{array}{l}23.4b_{1}+17.5b_{2}+31b_{3}+16b_{4}\leq\bar{v},\\ 31b_{1}+17.7b_{2}+31b_{3}+31b_{4}\leq\bar{v},\\ 22b_{1}+18.6b_{2}+31b_{3}+19.4b_{4}\leq\bar{v},\\ b_{1}+b_{2}+b_{3}+b_{4}=1,\\ b_{1},b_{2},b_{3},b_{4}\geq0.\end{array}\right.$$
The optimal solutions are $\overline{a}^*=(0,0,1)$ and $\overline{b}^*=(0,1,0,0)$. ALPHA's expected payoff can then be calculated by formula (17).
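As a cross-check on the row player's side, the two crisp games $U^L$ and $U^R$ can be fed to the same LP formulation as in Section 1.3; `solve_row_player` is my own helper (not the paper's code), and the payoffs are in units of $u(VB)$.

```python
# Minimal sketch: solve the row player's LP for the two crisp games U^L and U^R.
import numpy as np
from scipy.optimize import linprog

def solve_row_player(X):
    """Return (a*, game value) for the row player of payoff matrix X, per Eq. (14)."""
    m, n = X.shape
    c = np.r_[np.zeros(m), -1.0]                  # maximize u  <=>  minimize -u
    A_ub = np.c_[-X.T, np.ones(n)]                # u <= (X^T a)_j for every column j
    A_eq = np.r_[np.ones(m), 0.0].reshape(1, -1)  # probabilities sum to one
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0], bounds=bounds)
    return res.x[:m], res.x[m]

UL = np.array([[20.8, 7.2, 8.0, 1.0], [8.0, 4.0, 6.4, 1.6], [19.2, 2.0, 4.0, 2.0]])
UR = np.array([[23.4, 17.5, 31.0, 16.0], [31.0, 17.7, 31.0, 31.0], [22.0, 18.6, 31.0, 19.4]])
for name, U in (("U^L", UL), ("U^R", UR)):
    a_star, value = solve_row_player(U)
    print(name, a_star.round(4), round(value, 4))   # a* = (0, 0, 1); values 2 and 18.6
```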
At the equilibrium point $(\underline{a}^*,\underline{b}^*)$ we have
$$M(B)=1.$$
At the equilibrium point $(\overline{a}^*,\overline{b}^*)$ we have
$$M(VG)=0.6,\quad M(G)=0.6,\quad M(F)=0.4,\quad M(B)=1,\quad M(VB)=0.6.$$
Finally, the value of the MBS matrix game is
$$[\min u(M_{\underline{a}^*\underline{b}^*}),\;\max u(M_{\overline{a}^*\overline{b}^*})]=\left[\,u(B),\;0.6u(VG)+0.6u(G)+0.4u(F)+u(B)+0.6u(VB)\,\right].$$
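Substituting the chain utilities with $u(VB)=1$ gives the numeric endpoints of $V$; a quick check of the arithmetic (my own, not from the paper):

```python
# Evaluating the game value V = [V^L, V^R] in units of u(VB) = 1.
u = {"VB": 1, "B": 2, "F": 4, "G": 8, "VG": 16}
V_L = u["B"]
V_R = 0.6 * u["VG"] + 0.6 * u["G"] + 0.4 * u["F"] + u["B"] + 0.6 * u["VB"]
print(V_L, V_R)   # 2 and 18.6, matching the values of the crisp games U^L and U^R
```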

