A Theoretical Perspective of Solving Phaseless Compressed Sensing via Its Nonconvex Relaxation*

arXiv:1702.00110v1 [math.OC] 1 Feb 2017

Guowei You†    Zheng-Hai Huang‡    You Wang§

Abstract

As a natural extension of compressive sensing, and driven by the requirements of some practical problems, Phaseless Compressed Sensing (PCS) has been introduced and studied recently. Many theoretical results have been obtained for PCS with the aid of its convex relaxation. Motivated by successful applications of nonconvex relaxed methods for solving compressive sensing, in this paper we investigate PCS via its nonconvex relaxation. Specifically, we relax PCS in the real context by the corresponding ℓ_p-minimization with p ∈ (0,1). We show that there exists a constant p* ∈ (0,1] such that for any fixed p ∈ (0,p*), every optimal solution to the ℓ_p-minimization also solves the concerned problem; and we derive an expression for such a constant p* by making use of the known data and the sparsity level of the concerned problem. These results provide a theoretical basis for solving this class of problems via the corresponding ℓ_p-minimization.

Key words: Phase retrieval, phaseless compressed sensing, compressed sensing, ℓ_p-minimization.

*This work was partially supported by the National Natural Science Foundation of China (Grant No. 11431002).
†Department of Mathematics, School of Science, Tianjin University, Tianjin 300072, P.R. China; and Department of Mathematics and Statistics, Henan University of Science and Technology, Luoyang 471023, P.R. China. Email: [email protected]
‡Corresponding Author. Department of Mathematics, School of Science, Tianjin University, Tianjin 300072, P.R. China. This author is also with the Center for Applied Mathematics of Tianjin University. Email: [email protected]. Tel: +86-22-27403615. Fax: +86-22-27403615.
§Department of Mathematics, School of Science, Tianjin University, Tianjin 300072, P.R. China. Email: wang [email protected].
1 Introduction

In the past decade, Compressive Sensing (CS) has gained intensive attention; see [5,6,10,16,24] and the references therein. It aims at recovering a sparsest vector from an underdetermined system of linear equations. In other words, CS solves the following ℓ_0-minimization:

    min ‖x‖_0  s.t.  Ax = b,                                   (1.1)

where A ∈ R^{m×n} with m ≪ n and ‖x‖_0 denotes the number of nonzero elements of x ∈ R^n. Unfortunately, problem (1.1) is NP-hard [19]. To deal with (1.1), many methods have been developed, among which the ℓ_1-minimization approach is well known. To fill the gap between the ℓ_0-minimization and the ℓ_1-minimization, many authors have studied the ℓ_p-minimization (see, for example, [7,9,11,14,17,20,22,23,26,28,29]):

    min ‖x‖_p^p  s.t.  Ax = b,                                 (1.2)

where p ∈ (0,1) and ‖x‖_p = (Σ_i |x_i|^p)^{1/p} is the ℓ_p quasi-norm of x. It has been shown that exact recovery via the ℓ_p-minimization needs fewer measurements for small p than the ℓ_1-minimization. Recently, Peng, Yue and Li [20] showed that there exists a constant p* > 0 such that for any fixed p ∈ (0,p*), every optimal solution to the ℓ_p-minimization also solves the corresponding ℓ_0-minimization. Such an important property has also been studied for some related problems (see, for example, [8,12,18]).

In recent years, Phase Retrieval (PR) has been paid wide attention ([1–4]). Mathematically, PR refers to recovering a vector x_0 ∈ C^n (or R^n) from a set of phaseless measurements {b_j = |⟨φ_j, x_0⟩|, j = 1,2,...,m}, where φ_j ∈ C^n (or R^n) for each j = 1,2,...,m. PR has been applied to X-ray imaging, crystallography, electron microscopy and so on. In many applications, the vectors to be recovered are often sparse in a certain basis; in particular, this occurs in some regimes of X-ray crystallography [27]. The corresponding model is called Phaseless Compressed Sensing (PCS) [25]. Recently, PCS in the real context has been studied ([13,25,27]); its model is given by

    (P_0)  min ‖x‖_0  s.t.  |Φx| = b,

where Φ = (φ_1,...,φ_m)^⊤ ∈ R^{m×n} has full row rank with m ≪ n, b ∈ R^m_+ and x ∈ R^n. Hereafter, the symbol |·| denotes the component-wise absolute value of a vector, i.e., |u| = (|u_1|,...,|u_m|)^⊤ for u = (u_1,...,u_m)^⊤ ∈ R^m, and the superscript ⊤ represents transposition. As was done in the case of CS, problem (P_0) can be relaxed by

    (P_p)  min ‖x‖_p^p  s.t.  |Φx| = b,   where p ∈ (0,1].

When p = 1, problem (P_p) is called the ℓ_1-minimization for PCS. Exact recovery conditions for problem (P_0) via the ℓ_1-minimization have been studied (see, for example, [13,25,27]), including the Strong Restricted Isometry Property and the Null Space Property. However, it seems that problem (P_p) with p ∈ (0,1) has not been studied so far. An interesting question is whether or not there exists a constant p* ∈ (0,1] such that for any fixed p ∈ (0,p*), every optimal solution to problem (P_p) also solves problem (P_0). In this paper, we answer this question. That is, without any additional assumption, we show that there exists a constant p* ∈ (0,1] such that every optimal solution to problem (P_p) solves problem (P_0) whenever p ∈ (0,p*), and we derive an expression for p* by making use of the matrix Φ, the vector b and the sparsity level of the concerned problem.

The rest of the paper is organized as follows. In Section 2, we give some basic concepts and results which will be used in later sections. Section 3 contains three subsections. In Subsection 3.1, we discuss the finiteness of the set of optimal solutions of problem (P_0) and, furthermore, bound all optimal solutions of problem (P_0) by a box set; in Subsection 3.2, we bound the optimal solution set of problem (P_p) by a box set; and in Subsection 3.3, we give reformulations of problems (P_0) and (P_p), respectively.
In Section 4, we show that there exists a constant p* ∈ (0,1] such that problem (P_0) is equivalent to problem (P_p) whenever p ∈ (0,p*), and derive an expression for p* by making use of the matrix Φ, the vector b and the sparsity level of the concerned problem. In Section 5, we give two examples to illustrate our theoretical findings. Some conclusions and comments are given in the last section.

Conventions on notation in this paper. We denote a matrix by a boldface uppercase letter, a vector by a boldface lowercase letter, and a real number by a lowercase letter. All vectors are column vectors. In particular, the vector of all ones is denoted by 1 and the vector of all zeros by 0; their dimensions are determined by the context in which they appear. For any positive integer n, we denote [n] := {1,2,...,n}. For simplicity, we use (x,y) to denote (x^⊤,y^⊤)^⊤; R^n_+ denotes the nonnegative n-dimensional orthant R^n_+ := {(x_1,...,x_n)^⊤ | x_i ≥ 0, i ∈ [n]}; sign(·) denotes the sign function, i.e., for any scalar a ≥ 0, sign(a) = 1 if a > 0 and sign(a) = 0 if a = 0. For any vector u ≥ 0, we use sign(u) to denote the vector whose i-th element is sign(u_i). For two n-dimensional vectors u and v, u ◦ v := (u_1 v_1,...,u_n v_n)^⊤. For a matrix Φ ∈ R^{m×n} and any given index set I ⊂ [n], I^c := [n]\I is the complement of I, and #(I) denotes the cardinality of I, i.e., the number of elements of I; Φ_I and x_I denote the sub-matrix and the #(I)-dimensional vector constructed from the columns of Φ and the elements of x corresponding to the indices in I, respectively. Define the box set B_∞(r) := {x ∈ R^n | ‖x‖_∞ ≤ r} for any given r > 0.

2 Preliminaries

In this section, we give some basic concepts and derive several simple results which will be used in later sections.

Definition 2.1 (i) Define a function f : R^n_+ → R^n_+ by

    f(x) = (f_1(x_1), f_2(x_2), ..., f_n(x_n))^⊤,              (2.1)

where f_i : R_+ → R_+ for all i ∈ [n].
The (vector-valued) function f is said to be monotonically nondecreasing if for any two nonnegative vectors u and v, u ≤ v implies f(u) ≤ f(v); in other words, 0 ≤ u ≤ v implies f_i(0) ≤ f_i(u_i) ≤ f_i(v_i) for every i ∈ [n].

(ii) Let F : R^n_+ → R_+. The (real-valued) function F is said to be monotonically nondecreasing if for any two nonnegative vectors u and v, u ≤ v implies F(u) ≤ F(v).

Two monotonically nondecreasing functions are given in the following examples; they will be used frequently in our subsequent analysis.

Example 2.1 Suppose that f : R^n_+ → R^n_+ is defined by (2.1) with f_i(x_i) = sign(x_i) for every i ∈ [n]. We denote this function by sign(·). It is easy to see that sign(·) is monotonically nondecreasing.

Example 2.2 Suppose that f : R^n_+ → R^n_+ is defined by (2.1) with f_i(x_i) = x_i^p for every i ∈ [n] and p ∈ (0,1]. It is easy to see that this function is monotonically nondecreasing.

For any given vectors u and v with 0 ≤ u ≤ v and a monotonically nondecreasing function f, we have

    1^⊤ f(u) = Σ_{i=1}^n f_i(u_i) ≤ Σ_{i=1}^n f_i(v_i) = 1^⊤ f(v).

Therefore, 1^⊤ f is monotonically nondecreasing.

Let D_{R^n} ⊆ R^n be an arbitrarily given non-empty set, and let

    D̃_{R^{2n}} := {(x,y) | x ∈ D_{R^n}, |x| ≤ y}.

Considering the following two optimization problems with non-empty solution sets:

    (P_x): min_{x ∈ D_{R^n}} 1^⊤ f(|x|)    and    (P_xy): min_{(x,y) ∈ D̃_{R^{2n}}} 1^⊤ f(y),

we have the following result.

Lemma 2.1 Suppose that f : R^n_+ → R^n_+ is monotonically nondecreasing. Then problem (P_x) is equivalent to problem (P_xy) in the sense that if x* is an optimal solution of problem (P_x), then there exists a vector y* such that (x*,y*) solves (P_xy); and if (x*,y*) is an optimal solution of problem (P_xy), then x* solves (P_x).

Proof. Assume that x* ∈ D_{R^n} is an optimal solution of problem (P_x); then

    1^⊤ f(|x*|) ≤ 1^⊤ f(|x|),  ∀ x ∈ D_{R^n}.

Let y* = |x*|; then (x*,y*) = (x*,|x*|) ∈ D̃_{R^{2n}}.
Since f is monotonically nondecreasing, it follows that

    1^⊤ f(y*) = 1^⊤ f(|x*|) ≤ 1^⊤ f(|x|) ≤ 1^⊤ f(y)

for any (x,y) ∈ D̃_{R^{2n}}. That is, (x*,y*) = (x*,|x*|) ∈ D̃_{R^{2n}} is an optimal solution of problem (P_xy).

Conversely, assume that (x*,y*) is an optimal solution of problem (P_xy); then x* ∈ D_{R^n}, |x*| ≤ y* and

    1^⊤ f(y*) ≤ 1^⊤ f(y),  ∀ (x,y) ∈ D̃_{R^{2n}}.

Furthermore, for any x ∈ D_{R^n}, let y = |x|; then (x,y) ∈ D̃_{R^{2n}}. Since f is monotonically nondecreasing, it follows that

    1^⊤ f(|x*|) ≤ 1^⊤ f(y*) ≤ 1^⊤ f(y) = 1^⊤ f(|x|),  ∀ x ∈ D_{R^n}.

That is, x* ∈ D_{R^n} is an optimal solution of problem (P_x). ✷

Corollary 2.1 Suppose that D̂_{R^n} is a non-empty subset of R^n, that l and u are two given vectors in R^n satisfying l < 0 < u, and that v = max{−l, u}, whose i-th component is max{−l_i, u_i}. Denote

    D_{R^n} := {x | x ∈ D̂_{R^n}, l ≤ x ≤ u},
    D̃_{R^{2n}} := {(x,y) | x ∈ D_{R^n}, |x| ≤ y},
    D_{R^{2n}} := {(x,y) | x ∈ D_{R^n}, |x| ≤ y, 0 ≤ y ≤ v}.

If f : R^n_+ → R^n_+ is monotonically nondecreasing, then the problem

    min_{x ∈ D_{R^n}} 1^⊤ f(|x|)                               (2.2)

is equivalent to the problem

    min_{(x,y) ∈ D_{R^{2n}}} 1^⊤ f(y)                          (2.3)

in the sense that if x* is an optimal solution of problem (2.2), then there exists a vector y* such that (x*,y*) solves problem (2.3); and if (x*,y*) is an optimal solution of problem (2.3), then x* solves problem (2.2).

Proof. By Lemma 2.1, problem (2.2) is equivalent to the problem

    min_{(x,y) ∈ D̃_{R^{2n}}} 1^⊤ f(y).                         (2.4)

So it is sufficient to show that problem (2.4) is equivalent to problem (2.3).

Assume that (x*,y*) is an optimal solution of problem (2.4); then x* ∈ D_{R^n} and |x*| ≤ y*. Let ỹ* = |x*|; then (x*,ỹ*) is an optimal solution of problem (2.3). Otherwise, suppose that there exists (x̂,ŷ) ∈ D_{R^{2n}} ⊆ D̃_{R^{2n}} such that 1^⊤ f(ŷ) < 1^⊤ f(ỹ*). Since |x̂| ≤ ŷ and f is monotonically nondecreasing, it follows that

    1^⊤ f(|x̂|) ≤ 1^⊤ f(ŷ) < 1^⊤ f(ỹ*) = 1^⊤ f(|x*|) ≤ 1^⊤ f(y*),

which contradicts the assumption that (x*,y*) is an optimal solution of problem (2.4).

Conversely, assume that (x*,y*) is an optimal solution of problem (2.3).
Let ỹ* = |x*|; then (x*,ỹ*) ∈ D̃_{R^{2n}} must be an optimal solution of problem (2.4). Otherwise, suppose that there exists (x̂,ŷ) ∈ D̃_{R^{2n}} such that 1^⊤ f(ŷ) < 1^⊤ f(ỹ*). Since |x̂| ≤ ŷ and f is monotonically nondecreasing, we have

    1^⊤ f(|x̂|) ≤ 1^⊤ f(ŷ) < 1^⊤ f(ỹ*) = 1^⊤ f(|x*|) ≤ 1^⊤ f(y*).

Let ẑ = |x̂|; then it is easy to see that (x̂,ẑ) ∈ D_{R^{2n}} and 1^⊤ f(ẑ) < 1^⊤ f(y*), which contradicts the assumption that (x*,y*) is an optimal solution of problem (2.3). ✷

The following two results can be obtained easily, and hence we omit their proofs.

Lemma 2.2 Suppose that x* ∈ D ⊆ D̃ ⊆ R^n is an optimal solution of min_{x ∈ D̃} f(x). Then x* is an optimal solution of min_{x ∈ D} f(x).

Lemma 2.3 Suppose that D ⊆ D̃ ⊆ R^n. If D contains all optimal solutions of min_{x ∈ D̃} f(x), then min_{x ∈ D} f(x) is equivalent to min_{x ∈ D̃} f(x).

3 Reformulations of Problems (P_0) and (P_p)

To achieve our main results, we need to confine the solution sets of problems (P_0) and (P_p) to a common bounded set. To this end, we discuss the boundedness of the solution sets of problem (P_0) and problem (P_p), respectively.

3.1 Boundedness of the Solution Set of Problem (P_0)

Consider the classical ℓ_0-minimization in the case of CS:

    min ‖x‖_0  s.t.  Φx = b_ǫ,                                 (3.1)

where b_ǫ := b ◦ ǫ ∈ R^m for some fixed ǫ = (ǫ_1,...,ǫ_m)^⊤ ∈ {−1,1}^m. For problem (3.1), we assume that every optimal solution has exactly s nonzero components. We denote the solution set of problem (3.1) by S_ǫ.

Lemma 3.1 Given an arbitrary optimal solution x̂ ∈ S_ǫ, denote its support set by I_x̂, i.e., I_x̂ = {i | x̂_i ≠ 0, i ∈ [n]} with #(I_x̂) = s. Then the corresponding sub-matrix Φ_{I_x̂} has full column rank.

Proof. Hereafter, for a given x_I, we use x ∈ R^n to denote the expanded n-dimensional vector defined by x_I := x_I and x_{I^c} := 0.

Since x̂ ∈ S_ǫ, it holds that b_ǫ = Φ_{I_x̂} x̂_{I_x̂}. Suppose that Φ_{I_x̂} does not have full column rank; then there is more than one solution of the linear equation Φ_{I_x̂} x_{I_x̂} = b_ǫ.
Thus, there exists x*_{I_x̂} with x*_{I_x̂} ≠ x̂_{I_x̂} such that Φ_{I_x̂} x*_{I_x̂} = b_ǫ. Let

    x^t_{I_x̂} = t x̂_{I_x̂} + (1−t) x*_{I_x̂},  ∀ t ∈ R;

then x^t_{I_x̂} is a solution of Φ_{I_x̂} x^t_{I_x̂} = b_ǫ for any t ∈ R. Since x*_{I_x̂} ≠ x̂_{I_x̂}, there exists some i ∈ I_x̂ such that

    (x*_{I_x̂})_i = d ≠ (x̂_{I_x̂})_i = c ≠ 0.

Let t_0 := d/(d−c) and x^0_{I_x̂} := x^{t_0}_{I_x̂}; then we have

    (x^0_{I_x̂})_i = (t_0 x̂_{I_x̂} + (1−t_0) x*_{I_x̂})_i = 0.

Notice that x̂ and x* have the same support set I_x̂. However, the support set of x^0 is a subset of I_x̂ \ {i}, which implies that x^0 is a sparser solution of problem (3.1) than x̂. This contradicts the assumption that every optimal solution of problem (3.1) has exactly s nonzero elements. Therefore, Φ_{I_x̂} must have full column rank. We complete the proof. ✷

Remark 3.1 From the above proof, it follows that there do not exist x̂, x̄ ∈ S_ǫ with x̂ ≠ x̄ such that I_x̂ = I_x̄.

Corollary 3.1 Problem (3.1) has finitely many optimal solutions.

Proof. Suppose that each optimal solution of problem (3.1), say x̂, has s (≤ m) nonzero components. By Lemma 3.1, Φ_{I_x̂} has full column rank. There are at most C_n^s such sub-matrices of Φ. Thus, by Remark 3.1, the number of optimal solutions of problem (3.1) with exactly s nonzero components is no more than C_n^s. ✷

Corollary 3.2 Problem (P_0) has finitely many optimal solutions.

Proof. Denote the solution set of problem (P_0) by S. It is easy to see that

    S ⊆ ∪_{ǫ ∈ {−1,1}^m} S_ǫ.

Since every S_ǫ is finite by Corollary 3.1, it follows from the above formula that S is finite. ✷

Theorem 3.1 Suppose that every optimal solution of problem (3.1) has exactly s (≤ m) nonzero components. Then problem (3.1) is equivalent to

    min ‖x‖_0  s.t.  Φx = b_ǫ, ‖x‖_∞ ≤ c_0^ǫ,                  (3.2)

where

    c_0^ǫ := max_{I ⊂ [n], #(I)=s} max_{i ∈ I} |((Φ_I^⊤ Φ_I)^{−1} Φ_I^⊤ b_ǫ)_i| > 0,   (3.3)

the maximum being taken over those I for which Φ_I has full column rank.

Proof.
Suppose that x̂ is an arbitrary optimal solution of problem (3.1) with support set I_x̂ and #(I_x̂) = s. Then Φ_{I_x̂} x̂_{I_x̂} = b_ǫ. By Lemma 3.1, Φ_{I_x̂} has full column rank, and hence x̂_{I_x̂} = (Φ_{I_x̂}^⊤ Φ_{I_x̂})^{−1} Φ_{I_x̂}^⊤ b_ǫ. Obviously,

    ‖x̂‖_∞ = ‖x̂_{I_x̂}‖_∞ = max_{i ∈ I_x̂} |((Φ_{I_x̂}^⊤ Φ_{I_x̂})^{−1} Φ_{I_x̂}^⊤ b_ǫ)_i| ≤ c_0^ǫ,

where c_0^ǫ is defined by (3.3). Therefore, x̂ is contained in the box B_∞(c_0^ǫ). Furthermore, the feasible set of problem (3.2) contains all optimal solutions of problem (3.1). By Lemma 2.3, problem (3.1) is equivalent to problem (3.2). The proof is complete. ✷

Theorem 3.2 Suppose that every optimal solution of problem (P_0) has exactly s (≤ m) nonzero components. Then problem (P_0) is equivalent to

    min ‖x‖_0  s.t.  |Φx| = b, ‖x‖_∞ ≤ c_0 := max_{ǫ ∈ {−1,1}^m} c_0^ǫ > 0,   (3.4)

where c_0^ǫ is defined by (3.3).

Proof. Let S be the optimal solution set of problem (P_0); then

    S ⊆ ∪_{ǫ ∈ {−1,1}^m} S_ǫ.

By Theorem 3.1, every S_ǫ is contained in the box

    B_∞(c_0^ǫ) := {x ∈ R^n | ‖x‖_∞ ≤ c_0^ǫ}.

Then S is contained in ∪_ǫ B_∞(c_0^ǫ) ⊆ B_∞(c_0) with c_0 given by (3.4). Hence, the feasible set of problem (3.4) contains all optimal solutions of problem (P_0). By Lemma 2.3, problem (3.4) is equivalent to problem (P_0). The proof is complete. ✷

3.2 Boundedness of the Solution Set of Problem (P_p)

We bound all optimal solutions of problem (P_p) by considering the following problem for any given p ∈ (0,1):

    min ‖x‖_p^p  s.t.  Φx = b_ǫ,                               (3.5)

where Φ ∈ R^{m×n} has full row rank and b_ǫ = b ◦ ǫ ∈ R^m. We denote the solution set of problem (3.5) by S_ǫ^p.

Lemma 3.2 The solution set S_ǫ^p of problem (3.5) is contained in the box B_∞(c_p^ǫ), where

    c_p^ǫ := n · max_{1 ≤ i ≤ n} |(Φ^⊤ (ΦΦ^⊤)^{−1} b_ǫ)_i|.

Proof. This result follows from [20, Remark 1]; we omit the proof. ✷

Now, we consider problem (P_p) with p ∈ (0,1).

Theorem 3.3 Problem (P_p) is equivalent to

    min ‖x‖_p^p  s.t.  |Φx| = b, ‖x‖_∞ ≤ c_1 := max_{ǫ ∈ {−1,1}^m} c_p^ǫ > 0,   (3.6)

where c_p^ǫ is defined in Lemma 3.2.

Proof.
Let S_p be the solution set of problem (P_p); then

    S_p ⊆ ∪_{ǫ ∈ {−1,1}^m} S_ǫ^p.

Since every S_ǫ^p is contained in the box B_∞(c_p^ǫ) by Lemma 3.2, it follows from the above inclusion that S_p is contained in ∪_ǫ B_∞(c_p^ǫ) ⊆ B_∞(c_1) with c_1 defined by (3.6). Hence, the feasible set of problem (3.6) contains all optimal solutions of problem (P_p). By Lemma 2.3, problem (3.6) is equivalent to problem (P_p). The proof is complete. ✷

3.3 Reformulations of Problems (P_0) and (P_p)

Define the constant

    r_0 := max{c_0, c_1} > 0,                                  (3.7)

where c_0 and c_1 are given in (3.4) and (3.6), respectively.

Remark 3.2 From the results of the previous two subsections, we can see that

    (P_0bd)  min ‖x‖_0  s.t.  |Φx| = b, ‖x‖_∞ ≤ r_0

and

    (P_pbd)  min ‖x‖_p^p  s.t.  |Φx| = b, ‖x‖_∞ ≤ r_0

are equivalent to problem (P_0) and problem (P_p), respectively.

Introduce a variable y ∈ R^n such that |x| ≤ y. Note that |x| ≤ y is equivalent to −y ≤ x ≤ y, i.e., to −x − y ≤ 0 and x − y ≤ 0. Define the two block matrices Ψ := (−E, E)^⊤ and Γ := (−E, −E)^⊤, where E ∈ R^{n×n} is the n×n identity matrix; then it is easy to verify that |x| ≤ y if and only if Ψx + Γy ≤ 0. Let

    T_ǫ := {(x,y) ∈ R^{2n} | Φx = b_ǫ, Ψx + Γy ≤ 0, −r_0 1 ≤ x ≤ r_0 1, 0 ≤ y ≤ r_0 1},

where ǫ ∈ {−1,1}^m ⊂ R^m and b_ǫ = b ◦ ǫ; then it holds that

    T := {(x,y) ∈ R^{2n} | |Φx| = b, ‖x‖_∞ ≤ r_0, |x| ≤ y, 0 ≤ y ≤ r_0 1} = ∪_{ǫ ∈ {−1,1}^m} T_ǫ.   (3.8)

It is easy to see that

    T, T_ǫ ⊆ [−r_0, r_0]^n × [0, r_0]^n ⊂ R^{2n}.

Now, we consider the following two problems with the same feasible set:

    min_{(x,y)} ‖y‖_0  s.t.  (x,y) ∈ T,                        (3.9)

    min_{(x,y)} ‖y‖_p^p  s.t.  (x,y) ∈ T.                      (3.10)

Theorem 3.4 Problem (P_0bd) is equivalent to problem (3.9), and problem (P_pbd) is equivalent to problem (3.10).

Proof. From Examples 2.1 and 2.2, we know that both ‖·‖_0 and ‖·‖_p^p are monotonically nondecreasing when confined from R^n_+ to R_+.
Let

    D̂_{R^n} := {x ∈ R^n | |Φx| = b},
    D_{R^n} := {x | x ∈ D̂_{R^n}, l ≤ x ≤ u},
    D_{R^{2n}} := {(x,y) | x ∈ D_{R^n}, |x| ≤ y, 0 ≤ y ≤ v},

where l = −r_0 1 < 0 and u = r_0 1 > 0, and hence v = max{−l, u} = r_0 1. By a direct application of Corollary 2.1, we complete the proof. ✷
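The box bound of Theorems 3.1 and 3.2 can be checked numerically on a toy instance. The sketch below is not part of the paper: the random matrix Φ, the variable names (`x_true`, `c0`), and the brute-force search are our own illustrative assumptions. It enumerates the sign patterns ǫ ∈ {−1,1}^m and the supports I with #(I) = s, recovers the solutions of (P_0) by least squares, computes c_0 as in (3.3)–(3.4), and verifies that every recovered solution lies in B_∞(c_0).

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

m, n, s = 4, 6, 2
Phi = rng.standard_normal((m, n))      # full row rank with probability 1
x_true = np.zeros(n)
x_true[[1, 4]] = [1.5, -2.0]           # an s-sparse signal
b = np.abs(Phi @ x_true)               # phaseless measurements, b >= 0

# Brute-force (P_0): for each sign pattern eps and each support I with
# #(I) = s, fit Phi_I x_I = b_eps by least squares and keep every x that
# satisfies |Phi x| = b exactly (up to roundoff).
solutions = []
for eps in itertools.product([-1.0, 1.0], repeat=m):
    b_eps = b * np.array(eps)
    for I in itertools.combinations(range(n), s):
        x_I, *_ = np.linalg.lstsq(Phi[:, list(I)], b_eps, rcond=None)
        x = np.zeros(n)
        x[list(I)] = x_I
        if np.allclose(np.abs(Phi @ x), b, atol=1e-8):
            solutions.append(x)

# The constant c_0 of Theorem 3.2: maximize over sign patterns eps and over
# all supports I of size s whose sub-matrix Phi_I has full column rank.
c0 = 0.0
for eps in itertools.product([-1.0, 1.0], repeat=m):
    b_eps = b * np.array(eps)
    for I in itertools.combinations(range(n), s):
        Phi_I = Phi[:, list(I)]
        if np.linalg.matrix_rank(Phi_I) == s:
            x_I = np.linalg.solve(Phi_I.T @ Phi_I, Phi_I.T @ b_eps)
            c0 = max(c0, float(np.max(np.abs(x_I))))

# Every recovered solution of (P_0) lies in the box B_inf(c0).
assert all(np.max(np.abs(x)) <= c0 + 1e-8 for x in solutions)
print(len(solutions), c0)
```

Note that the enumeration visits 2^m · C(n,s) cases and is viable only for tiny instances; this exponential blow-up over sign patterns is precisely why the bounded reformulations (3.9)–(3.10) over the single set T are useful.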
