Structural focalization

Robert J. Simmons

Focusing, introduced by Jean-Marc Andreoli in the context of classical linear logic [Andreoli 1992], defines a normal form for sequent calculus derivations that cuts down on the number of possible derivations by eagerly applying invertible rules and grouping sequences of non-invertible rules. A focused sequent calculus is defined relative to some non-focused sequent calculus; focalization is the property that every non-focused derivation can be transformed into a focused derivation.

In this paper, we present a focused sequent calculus for propositional intuitionistic logic and prove the focalization property relative to a standard presentation of propositional intuitionistic logic. Compared to existing approaches, the proof is quite concise, depending only on the internal soundness and completeness of the focused logic. In turn, both of these properties can be established (and mechanically verified) by structural induction in the style of Pfenning's structural cut elimination, without the need for any tedious and repetitious invertibility lemmas. The proof of cut admissibility for the focused system, which establishes internal soundness, is not particularly novel. The proof of identity expansion, which establishes internal completeness, is a major contribution of this work.

Draft as of January 22, 2014

1. INTRODUCTION

The propositions of intuitionistic propositional logic are easily recognizable and standard: we will consider a logic with atomic propositions, falsehood, disjunction, truth, conjunction, and implication.

    P, Q ::= p | ⊥ | P₁ ∨ P₂ | ⊤ | P₁ ∧ P₂ | P₁ ⊃ P₂

The sequent calculus presentation for intuitionistic logic is also standard; the system in Figure 1 is precisely the propositional fragment of Kleene's sequent system G₃ as presented in [Pfenning 2000]. Contexts Γ are, as usual, considered to be unordered multisets of propositions P, and the structural properties of exchange, weakening, and contraction are admissible (each left rule implicitly contains a contraction).

    Judgment Γ −→ P. Each rule is written "name: from premises, conclude conclusion"; rules listed without premises are axioms.

    init :  Γ, p −→ p
    ⊥L   :  Γ, ⊥ −→ Q                                                (no rule ⊥R)
    ∨R₁  :  from Γ −→ P₁, conclude Γ −→ P₁ ∨ P₂
    ∨R₂  :  from Γ −→ P₂, conclude Γ −→ P₁ ∨ P₂
    ∨L   :  from Γ, P₁∨P₂, P₁ −→ Q and Γ, P₁∨P₂, P₂ −→ Q, conclude Γ, P₁∨P₂ −→ Q
    ⊤R   :  Γ −→ ⊤                                                   (no rule ⊤L)
    ∧R   :  from Γ −→ P₁ and Γ −→ P₂, conclude Γ −→ P₁ ∧ P₂
    ∧L₁  :  from Γ, P₁∧P₂, P₁ −→ Q, conclude Γ, P₁∧P₂ −→ Q
    ∧L₂  :  from Γ, P₁∧P₂, P₂ −→ Q, conclude Γ, P₁∧P₂ −→ Q
    ⊃R   :  from Γ, P₁ −→ P₂, conclude Γ −→ P₁ ⊃ P₂
    ⊃L   :  from Γ, P₁⊃P₂ −→ P₁ and Γ, P₁⊃P₂, P₂ −→ Q, conclude Γ, P₁⊃P₂ −→ Q

    Fig. 1. Sequent calculus for intuitionistic logic.

Sequent calculi are a nice way of presenting logics, and a logic's sequent calculus presentation is a convenient setting in which to establish the logic's metatheory in a way that is straightforwardly mechanizable in proof assistants (like Twelf or Agda) that are organized around the idea of structural induction. There are two key metatheoretic properties that we are interested in. The first, cut admissibility, justifies the use of lemmas: if we know P (if we have a derivation of the sequent Γ −→ P) and we know that Q follows from assuming P (if we have a derivation of the sequent Γ, P −→ Q), then we can come to know Q without the additional assumption of P (we can obtain a derivation of the sequent Γ −→ Q).¹ A proof of the cut admissibility property establishes the internal soundness of a logic – it implies that there are no closed proofs of contradiction, even by circuitous reasoning using lemmas. The identity property asserts that assuming P is always sufficient to conclude P, that is, that the sequent Γ, P −→ P is always derivable. A proof of the identity property establishes the internal completeness of a logic. We call these properties internal, following Pfenning [2010], to emphasize that these are properties of the deductive system itself and not a comment on the system's relationship to any external semantics.

¹ In common practice, the words proof and derivation are used interchangeably. In this article, we will be careful to refer to the formal objects constructed using sequent calculus rules (such as those in Figure 1) as derivations. Except when discussing natural deduction, the words proof and theorem will refer to theorems proved about these formal objects; these are frequently called metatheorems in the literature.
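Although the developments described in this article are mechanized in Twelf and Agda, the syntax above transcribes directly into any language with algebraic data types. The following Haskell sketch is our own illustration rather than part of the paper's artifacts; the constructor names and the use of strings for atoms are incidental choices.

```haskell
-- Propositions:  P, Q ::= p | ⊥ | P₁ ∨ P₂ | ⊤ | P₁ ∧ P₂ | P₁ ⊃ P₂
data Prop
  = Atom String      -- atomic proposition p
  | Falsehood        -- ⊥
  | Or  Prop Prop    -- P₁ ∨ P₂
  | Truth            -- ⊤
  | And Prop Prop    -- P₁ ∧ P₂
  | Imp Prop Prop    -- P₁ ⊃ P₂
  deriving (Eq, Show)

-- The example proposition used later in this introduction:
--   (p ∧ q) ⊃ (r ∧ s) ⊃ (p ∧ r)
example :: Prop
example =
  Imp (And (Atom "p") (Atom "q"))
      (Imp (And (Atom "r") (Atom "s"))
           (And (Atom "p") (Atom "r")))
```

A sequent Γ −→ P can then be represented as a list of hypotheses paired with a conclusion, with the multiset laws (exchange, and the admissibility of weakening and contraction) left informal.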
There is a tradition in logic, dating back to Gentzen [1935], that views the sequent calculus as a convenient formalism for proving a logic's metatheoretic properties while viewing natural deduction proofs as the "true proof objects."² One reason for this bias towards natural deduction is that natural deduction proofs have nice normalization properties. A natural deduction proof is normal if there are no instances of an introduction rule immediately followed by an elimination rule of the same connective; such detours give rise to local reductions which eliminate the detour, such as this one (derivations are rendered here as outlines: each judgment is followed by the rule or given derivation that concludes it, with the premises indented below):

    P₁ true                        by ∧E₁
      P₁ ∧ P₂ true                 by ∧I
        P₁ true                    by the derivation D₁
        P₂ true                    by the derivation D₂

        =⇒R

    P₁ true                        by the derivation D₁

The normalization property says that every natural deduction proof can be transformed into a normal natural deduction proof.

² This discussion assumes a basic familiarity with natural deduction. We refer the interested reader to Girard, Taylor, and Lafont's Proofs and Types [Girard et al. 1989]; the aforementioned quote comes from Section 5.4 of that work.

We are frequently interested in the set of normal natural deduction proofs of a given proposition. As an example, there is exactly one normal natural deduction proof for (p∧q) ⊃ (r∧s) ⊃ (p∧r). Presented as a derivation, that natural deduction proof looks like this:

    (p∧q) ⊃ (r∧s) ⊃ (p∧r) true     by ⊃I, discharging u
      (r∧s) ⊃ (p∧r) true           by ⊃I, discharging v
        p ∧ r true                 by ∧I
          p true                   by ∧E₁
            p∧q true               by hypothesis u
          r true                   by ∧E₁
            r∧s true               by hypothesis v

Under the standard proof term assignment for natural deduction, this (normal) natural deduction proof corresponds to the (irreducible) proof term λx.λy.⟨π₁x, π₁y⟩.

In contrast, there are many sequent calculus derivations of the same proposition. Here's one of them:

    · −→ (p∧q) ⊃ (r∧s) ⊃ (p∧r)     by ⊃R
      p∧q −→ (r∧s) ⊃ (p∧r)         by ⊃R
        p∧q, r∧s −→ p∧r            by ∧R
          p∧q, r∧s −→ p            by ∧L₁
            p∧q, r∧s, p −→ p       by init
          p∧q, r∧s −→ r            by ∧L₁
            p∧q, r∧s, r −→ r       by init

Reading from bottom to top, this derivation decomposes (p∧q) ⊃ (r∧s) ⊃ (p∧r) on the right, then decomposes (r∧s) ⊃ (p∧r) on the right, then decomposes p∧r on the right, and then (in one branch) decomposes p∧q on the left while (in the other branch) decomposing r∧s on the left. Other possibilities include decomposing p∧q on the left before decomposing (r∧s) ⊃ (p∧r) on the right and decomposing r∧s on the left before p∧r on the right; there are at least six different derivations even if you don't count derivations that do useless decompositions on the left.

These different derivations are particularly problematic if our goal is to do proof search for sequent calculus derivations, as inessential differences between derivations correspond to unnecessary choice points that a proof search procedure will need to backtrack over. It was in this context that Andreoli originally introduced the idea of focusing. Some connectives, such as implication A ⊃ B, are called asynchronous because their right rules can always be applied eagerly, without backtracking, during bottom-up proof search. Other connectives, such as disjunction A ∨ B, are called synchronous because their right rules cannot be applied eagerly. For instance, the ∨R₁ rule cannot be applied eagerly if we are looking for a proof of p −→ ⊥ ∨ p. This asynchronous or synchronous character is the connective's polarity.³

³ Andreoli dealt with a one-sided classical sequent calculus; in intuitionistic logic, it is common to call asynchronous connectives right-asynchronous and left-synchronous. Similarly, it is common to call synchronous connectives right-synchronous and left-asynchronous.
Synchronicity or polarity, a property of connectives, is closely connected to (and sometimes conflated with) a property of rules called invertibility; a rule is invertible if the conclusion of the rule implies each of the premises. So ⊃R is invertible (Γ −→ P₁ ⊃ P₂ implies Γ, P₁ −→ P₂) but ⊃L is not (Γ, P₁⊃P₂ −→ C does not imply Γ, P₁⊃P₂ −→ P₁). Rules that can be applied eagerly need to be invertible, so asynchronous connectives have invertible right rules and synchronous connectives have invertible left rules. Therefore, another synonym for asynchronous is right-invertible, and another synonym for synchronous is left-invertible. This terminology would be misleading in our case, as both the left and right rules for conjunction in Figure 1 are invertible.
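Before turning to Andreoli's restriction on proof search, note that the natural deduction side of this discussion is easy to make concrete. The following Haskell sketch is our own illustration (it is not the paper's proof term language): a small term language with pairing, projections, abstraction, and application, the proof term λx.λy.⟨π₁x, π₁y⟩ from above, and the ∧ local reduction as a one-step rewrite.

```haskell
-- Proof terms for natural deduction: variables, pairing ⟨M,N⟩,
-- projections π₁/π₂, λ-abstraction, and application.
data Tm
  = Var String
  | Pair Tm Tm          -- ∧ introduction
  | Pi1 Tm | Pi2 Tm     -- ∧ eliminations
  | Lam String Tm       -- ⊃ introduction
  | App Tm Tm           -- ⊃ elimination
  deriving (Eq, Show)

-- λx. λy. ⟨π₁ x, π₁ y⟩, the unique normal proof of (p∧q) ⊃ (r∧s) ⊃ (p∧r)
normalProof :: Tm
normalProof = Lam "x" (Lam "y" (Pair (Pi1 (Var "x")) (Pi1 (Var "y"))))

-- The ∧ local reductions: an introduction immediately followed by an
-- elimination of the same connective is a detour that can be removed.
localReduce :: Tm -> Maybe Tm
localReduce (Pi1 (Pair m _)) = Just m
localReduce (Pi2 (Pair _ n)) = Just n
localReduce _                = Nothing
```

No subterm of normalProof matches either detour pattern (and there is likewise no ⊃ detour, which would be handled by an analogous β-rule omitted here), which is the proof-term reading of the claim that this natural deduction proof is normal.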
Andreoli's key observation was that proof search only needs to consider derivations that have two alternating phases. In inversion phases, we eagerly apply (invertible) right rules to asynchronous connectives and (invertible) left rules to synchronous ones. When this is no longer possible, we begin a focusing phase by putting a single remaining proposition in focus, repeatedly decomposing it (and only it) by applying right rules to synchronous connectives and left rules to asynchronous ones. Andreoli described this restricted form of proof search as a regular proof search procedure in a restricted sequent calculus; such sequent calculi, and derivations in them, are called focused as opposed to unfocused [Andreoli 1992].

In order to adopt such a proof search strategy, it is important to know that the strategy is both sound (i.e., the proof search strategy will only say "the sequent has a derivation" if that is the case) and complete (i.e., the proof search strategy is capable of finding a proof if one exists). Soundness proofs for focusing are usually easy: focused derivations are essentially a syntactic refinement of the unfocused derivations. Completeness, the nontrivial direction, involves turning unfocused derivations into focused ones. This process is focalization.⁴ Thus, an effective procedure for focalization is a constructive witness to the completeness of focusing. The techniques described in this article are general and can be straightforwardly transferred to other modal and substructural logics, as explored in the author's dissertation [Simmons 2012]. Our approach has three key components, which we will now discuss in turn.

⁴ The usage of focus, focusing, focussing, and focalization is not standard in the literature. We use the words focus and focusing to describe a logic (e.g. the focused sequent calculus) and aspects of that logic (e.g. focused derivations, propositions in focus, left- or right-focused sequents, and focusing phases). Focalization, derived from the French focalisation, is reserved exclusively for the act of producing a focused derivation given an unfocused derivation; the focalization property establishes that focalization is always possible.

Focalization via cut and identity. Existing focalization proofs almost all fall prey to the need to prove multiple tedious invertibility lemmas describing the interaction of each rule with every other rule; this results in proofs that are unrealistic to write out, difficult to check, and exhausting to contemplate mechanizing. The way forward was first suggested by Chaudhuri [2006]. In his dissertation, he established the focalization property for linear logic as the consequence of the focused logic's internal soundness (the cut admissibility property) and completeness (the identity property). Stating and proving the identity property for a focused sequent calculus has remained a challenge, however.
A primary contribution of this work is identity expansion, a generalization of the identity property that is amenable to mechanized proof by structural induction on propositions. This identity property is, in turn, part of our larger development, a proof of the focalization property that entirely avoids the tedious invertibility lemmas that plague existing approaches. (We review existing techniques used to prove the focalization property in Section 6.)

Refining the focused calculus. The focused logic presented in this article is essentially equivalent to the presentation of LJF given by Liang and Miller [2009], a point we will return to in Section 2.1. A reader familiar with LJF will note three non-cosmetic differences. The first two, our use of a polarized variant of intuitionistic logic and our novel treatment of atomic and suspended propositions, will be discussed further in Section 2. A third change is that LJF does not force any particular ordering for the application of rules during an inversion phase. A seemingly inevitable consequence of this choice is that the proof of focalization must establish the equivalence of all permutations of these invertible rules; this is one of the aforementioned tedious invertibility lemmas that plague proofs of the focalization property. Our focused logic, like many others (including Andreoli's original system), fixes a particular inversion order.

We introduce a new calculus rather than reusing an existing one in order to present a focused logic and focalization proof that is computationally clean and straightforward to mechanize. Our desire to mechanize proofs of the focalization property also informed our decision to use propositional intuitionistic logic. All proofs in this article are mechanized in both Twelf [Pfenning and Schürmann 1999] and Agda [Norell 2007], though we will only mention the Twelf development. The concrete basis for our claim of computational cleanliness is that our mechanizations are complete artifacts capturing the constructive content of the proofs we present, and the size of this artifact scales linearly relative to the number of connectives; the approaches we call "tedious" tend to scale quadratically.

Proof terms. Since Andreoli's original work, focused sequent calculus derivations have been shown to be isomorphic to normal natural deduction proofs for restricted fragments of logic [Cervesato and Pfenning 2003] and variations on the usual focusing discipline [Howe 2001]. Such results challenge the position that natural deduction proofs are somehow more fundamental than sequent calculus derivations and also indicate that focalization is a fundamental property of logic. In Section 2.4, we present a proof term language for polarized intuitionistic logic that directly captures the branching and binding structure of focused derivations. The result is a term language generalizing the spine form of Cervesato and Pfenning [2003]. By presenting our cut admissibility and identity expansion proofs at the level of proof terms, we emphasize that the constructive content of a cut admissibility proof is a substitution function on proof terms generalizing the hereditary substitution of Watkins et al. [2002] in a spine form setting. Similarly, the constructive content of our identity expansion proof is a novel η-expansion property on proof terms.
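As a rough preview of the shape of that term language (the real thing appears in Section 2.4), a spine form separates a head variable from the sequence of eliminations applied to it. The following Haskell sketch is our own illustration of the general idea for the ⊃/∧ fragment, not the paper's actual syntax:

```haskell
-- In spine form, a normal term is either an introduction or a head
-- variable applied to a spine of eliminations. There is no way to write
-- an introduction immediately followed by an elimination, so every term
-- of this type is normal by construction.
data Normal
  = NLam String Normal        -- λx. N
  | NPair Normal Normal       -- ⟨N₁, N₂⟩
  | NHead String Spine        -- x · S  (a head variable and its spine)
  deriving Show

data Spine
  = SNil                      -- the empty spine
  | SApp Normal Spine         -- apply to an argument, then continue
  | SPi1 Spine | SPi2 Spine   -- project, then continue
  deriving Show

-- λx. λy. ⟨x · π₁, y · π₁⟩: the normal proof from Section 1 in spine form
spineExample :: Normal
spineExample =
  NLam "x" (NLam "y"
    (NPair (NHead "x" (SPi1 SNil)) (NHead "y" (SPi1 SNil))))
```

The polarized proof term language of Section 2.4 generalizes this picture.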
1.1 Outline

This article deals with three "soundness" properties and three "completeness" properties, so it is important to carefully explain what we're doing and when; the following discussion is represented graphically in Figure 2.

    Fig. 2. Theorems and their dependencies. Under the headings "Soundness" and "Completeness," the figure arranges: for the focused, polarized system, cut admissibility (Theorem 2, Section 3) and identity expansion (Theorem 3, Section 4); the unfocused admissibility lemmas (Section 5.1); de-focalization (Theorem 1, Section 2.5) and focalization (Theorem 4, Section 5.2); and, for the unfocused, unpolarized system, cut admissibility (Corollary 1, Section 5.3) and identity (Corollary 2, Section 5.3).

We present a new proof of the completeness of focusing (the focalization property, Theorem 4) for intuitionistic logic; the proof of the focalization property follows from the internal soundness and completeness of the focused sequent calculus (cut admissibility, Theorem 2, and identity expansion, Theorem 3). We will start, in Section 2, by motivating a polarized presentation of logic that syntactically differentiates the synchronous and asynchronous propositions. We then present a focused sequent calculus for polarized propositional intuitionistic logic and formally state the soundness and completeness of focusing. We also prove the soundness of focusing (the de-focalization property, Theorem 1) in this section, but it's pretty boring and independent of the proofs of cut admissibility, identity expansion, and the completeness of focusing.

Internal soundness for the focused sequent calculus is established by the cut admissibility theorem in Section 3, and internal completeness for the focused sequent calculus is established in Section 4 using a generalization of the identity expansion theorem first developed in [Simmons and Pfenning 2011b]. In Section 5 we prove the focalization property by showing unfocused admissibility, a group of lemmas establishing that the focused sequent calculus can act like an unfocused sequent calculus. Finally, rather than proving the internal soundness and completeness (cut and identity) for the unfocused system directly, we show that these properties can be established as corollaries of the first four theorems. In Section 6 we conclude with an overview of existing proofs of the focalization property.

We will henceforth avoid using the words soundness and completeness as much as possible. Instead, we will refer to the cut admissibility and identity theorems for the focused and unfocused sequent calculi by name, and will refer to the soundness and completeness of focusing as de-focalization and focalization, respectively.

2. POLARIZED LOGIC

There is a significant line of work on polarity in logic dating back to Andreoli [1992] and Girard [1993]. That line of work holds that polarity is a refinement property of propositions, that the asynchronous and synchronous propositions are syntactic refinements of the set of all propositions. We can determine the synchronous or
asynchronous character of a proposition by inspecting its outermost connective.⁵

⁵ Linear logic naturally has two polarities. Other systems, like Girard's LU and Liang and Miller's LKU, use more than these two polarities [Girard 1993; Liang and Miller 2011]. Furthermore, in LU the polarity of a proposition is determined by more than just the outermost connective.

In a 1991 note published to the LINEAR list [Girard 1991], Girard introduced the idea of syntactically differentiating the positive propositions (those Andreoli called synchronous) from the negative propositions (those Andreoli called asynchronous) while mediating between the two with shifts: the upshift ↑A⁺ includes positive propositions in the negative ones, and the downshift ↓A⁻ includes negative propositions in the positive ones. This polarization⁶ of logic was developed further by Girard in Ludics [Girard 2001] and treated extensively in Laurent and Zeilberger's Ph.D. theses [Laurent 2002; Zeilberger 2009b]. These are the propositions, positive and negative, for polarized intuitionistic logic:

    A⁺, B⁺, C⁺ ::= p⁺ | ↓A⁻ | ⊥ | A⁺ ∨ B⁺ | ⊤⁺ | A⁺ ∧⁺ B⁺
    A⁻, B⁻, C⁻ ::= p⁻ | ↑A⁺ | A⁺ ⊃ B⁻ | ⊤⁻ | A⁻ ∧⁻ B⁻

⁶ For the purposes of this article we are making a somewhat artificial distinction between polarity, Andreoli's classification of propositions as asynchronous and synchronous, and polarization, the segregation of these two classes as positive or negative propositions by using shifts. This distinction is not a standard one: "synchronous" and "positive" are elsewhere used interchangeably, "polarity" is used to describe what we call "polarization," and so on.

Linear logic is able to unambiguously assign all connectives to one category or the other, but in intuitionistic logic, truth ⊤ and conjunction P₁ ∧ P₂ can be understood as having either a positive character (corresponding to 1 and A⁺ ⊗ B⁺ in linear logic) or a negative character (corresponding to ⊤ and A⁻ & B⁻ in linear logic). We take the maximally general approach and allow both versions of truth and conjunction, which are decorated to emphasize their polarity.

The shifts introduced by Girard were modalities that might change the provability of a proposition. We adopt the later stance of Zeilberger [2009b], McLaughlin and Pfenning [2009], and others: shifts influence the structure of proofs, but not the provability of propositions. Therefore, we expect there to be a focused proof of A⁺ or A⁻ if and only if there is an unfocused proof of (A⁺)• or (A⁻)•, where (−)• is the erasure function given in Figure 3. The polarity of an atomic proposition can be arbitrary as long as it is consistent, as if each individual atomic proposition p is really intrinsically positive or negative, but the calculus in Figure 1 didn't notice.

    (p⁺)• = p⁺                         (p⁻)• = p⁻
    (↓A⁻)• = (A⁻)•                     (↑A⁺)• = (A⁺)•
    (⊥)• = ⊥                           (A⁺ ∨ B⁺)• = (A⁺)• ∨ (B⁺)•
    (A⁺ ⊃ B⁻)• = (A⁺)• ⊃ (B⁻)•
    (⊤⁺)• = ⊤                          (⊤⁻)• = ⊤
    (A⁺ ∧⁺ B⁺)• = (A⁺)• ∧ (B⁺)•        (A⁻ ∧⁻ B⁻)• = (A⁻)• ∧ (B⁻)•

    Fig. 3. Erasure of polarized propositions.

Shifts and polarization are computationally interesting phenomena. Our view of polarization lines up with the call-by-push-value system independently developed by Levy: positive propositions correspond to value types and negative propositions correspond to computation types [Levy 2004]. Shifts are also useful in theorem proving. By employing different polarization strategies, the name for partial inverses of erasure, the Imogen theorem prover can simulate fully-focused LJF proof search, proof search in an unfocused logic like Kleene's G₃, and proof search in many other partially-focused systems in between [McLaughlin and Pfenning 2009]. Furthermore, forward-chaining versus backward-chaining logic programming can be seen as arising from particular polarization strategies in uniform proof search [Chaudhuri et al. 2008]. However, in this article polarization is primarily a technical device. Shifts make it clearer that certain proofs are structurally inductive over propositions, especially identity expansion (Theorem 3). Additionally, the focus-interrupting nature of the shift is a critical part of our unfocused admissibility lemmas.⁷

⁷ Something like a shift is necessary for Liang and Miller's focalization proofs as well. Because their logic has polarity but no shifts, they construct delays δ⁺(A) = ⊤⁺ ∧⁺ A and δ⁻(A) = ⊤⁺ ⊃ A to force asynchronous or synchronous connectives to behave (respectively) like synchronous or asynchronous ones [Liang and Miller 2009]. This gets in the way of defining logical fragments by adding or removing connectives, as the presence of the connective ⊤⁺ is fundamental to their completeness proof.
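To make the polarized syntax and the erasure of Figure 3 concrete, here is a direct transcription as a pair of mutually recursive data types and functions. This Haskell sketch is again our own illustration (the paper's mechanizations are in Twelf and Agda), and it reuses the Prop type from the sketch in Section 1:

```haskell
-- Positive propositions  A⁺ ::= p⁺ | ↓A⁻ | ⊥ | A⁺ ∨ B⁺ | ⊤⁺ | A⁺ ∧⁺ B⁺
data Pos
  = PAtom String      -- p⁺
  | Down Neg          -- ↓A⁻
  | Bot               -- ⊥
  | POr Pos Pos       -- A⁺ ∨ B⁺
  | PTop              -- ⊤⁺
  | PAnd Pos Pos      -- A⁺ ∧⁺ B⁺
  deriving (Eq, Show)

-- Negative propositions  A⁻ ::= p⁻ | ↑A⁺ | A⁺ ⊃ B⁻ | ⊤⁻ | A⁻ ∧⁻ B⁻
data Neg
  = NAtom String      -- p⁻
  | Up Pos            -- ↑A⁺
  | NImp Pos Neg      -- A⁺ ⊃ B⁻
  | NTop              -- ⊤⁻
  | NAnd Neg Neg      -- A⁻ ∧⁻ B⁻
  deriving (Eq, Show)

-- Erasure (Figure 3): forget the shifts and the polarity decorations.
-- The polarity annotation on atoms is dropped here, matching the reading
-- that each atom is intrinsically positive or negative but that the
-- unpolarized calculus does not notice.
erasePos :: Pos -> Prop
erasePos (PAtom p)  = Atom p
erasePos (Down a)   = eraseNeg a
erasePos Bot        = Falsehood
erasePos (POr a b)  = Or (erasePos a) (erasePos b)
erasePos PTop       = Truth
erasePos (PAnd a b) = And (erasePos a) (erasePos b)

eraseNeg :: Neg -> Prop
eraseNeg (NAtom p)  = Atom p
eraseNeg (Up a)     = erasePos a
eraseNeg (NImp a b) = Imp (erasePos a) (eraseNeg b)
eraseNeg NTop       = Truth
eraseNeg (NAnd a b) = And (eraseNeg a) (eraseNeg b)
```

Under the stance adopted above, a focused proof of a polarized proposition should exist exactly when the unfocused calculus of Figure 1 proves its erasure; that is what the de-focalization and focalization theorems establish.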
2.1 Sequent calculus

We will develop our focused sequent calculus in two stages; in the first stage we do not consider atomic propositions. We can present sequents for our polarized logic in two equivalent ways. In the one-sequent view, we say that all sequents have the form Γ; L ⊢ U. The components of a sequent are defined by the following (currently incomplete) grammar:

    Hypothetical contexts   Γ ::= · | Γ, A⁻ | ...
    Inversion contexts      Ω ::= · | A⁺, Ω
    Antecedents             L ::= Ω | [A⁻]
    Succedents              U ::= [A⁺] | A⁺ | A⁻ | ...

This first view requires us to further restrict the form of sequents for two reasons. First, we only want to focus on one proposition at a time, so only one right focus [A⁺] or left focus [A⁻] should be present in a sequent. Second, focus and inversion phases should not overlap, so it must be the case that L = · when U = [A⁺] and that U ≠ A⁻ when L = [A⁻]. Given this restriction, the first view of sequents is equivalent to a three-sequent view in which there are three different sequent forms:

—Right focus: Γ ⊢ [A⁺], where L = · and is therefore omitted,
—Inversion: Γ; Ω ⊢ U, where U ≠ [A⁺], and
—Left focus: Γ; [A⁻] ⊢ U, where U is stable (more about this shortly).

The sequent calculus for polarized intuitionistic logic in Figure 4 is presented in terms of this three-sequent view. The right focus sequent Γ ⊢ [A⁺] describes a state in which non-invertible right rules are being applied to positive propositions, the left focus sequent Γ; [A⁻] ⊢ U describes a state in which non-invertible left rules are being applied to negative propositions, and the inversion sequent Γ; Ω ⊢ U describes everything else.

    Right focus sequents Γ ⊢ [A⁺]:
      ↓R   :  from Γ; · ⊢ A⁻, conclude Γ ⊢ [↓A⁻]
      (no rule ⊥R)
      ∨R₁  :  from Γ ⊢ [A⁺], conclude Γ ⊢ [A⁺ ∨ B⁺]
      ∨R₂  :  from Γ ⊢ [B⁺], conclude Γ ⊢ [A⁺ ∨ B⁺]
      ⊤⁺R  :  Γ ⊢ [⊤⁺]
      ∧⁺R  :  from Γ ⊢ [A⁺] and Γ ⊢ [B⁺], conclude Γ ⊢ [A⁺ ∧⁺ B⁺]

    Inversion sequents Γ; Ω ⊢ U, where U ≠ [A⁺]:
      focR :  from Γ ⊢ [A⁺], conclude Γ; · ⊢ A⁺
      focL :  from Γ, A⁻; [A⁻] ⊢ U with U stable, conclude Γ, A⁻; · ⊢ U
      (A⁺ stable: a positive succedent counts as stable)
      ↓L   :  from Γ, A⁻; Ω ⊢ U, conclude Γ; ↓A⁻, Ω ⊢ U
      ⊥L   :  Γ; ⊥, Ω ⊢ U
      ∨L   :  from Γ; A⁺, Ω ⊢ U and Γ; B⁺, Ω ⊢ U, conclude Γ; A⁺ ∨ B⁺, Ω ⊢ U
      ⊤⁺L  :  from Γ; Ω ⊢ U, conclude Γ; ⊤⁺, Ω ⊢ U
      ∧⁺L  :  from Γ; A⁺, B⁺, Ω ⊢ U, conclude Γ; A⁺ ∧⁺ B⁺, Ω ⊢ U
      ↑R   :  from Γ; · ⊢ A⁺, conclude Γ; · ⊢ ↑A⁺
      ⊃R   :  from Γ; A⁺ ⊢ B⁻, conclude Γ; · ⊢ A⁺ ⊃ B⁻
      ⊤⁻R  :  Γ; · ⊢ ⊤⁻
      ∧⁻R  :  from Γ; · ⊢ A⁻ and Γ; · ⊢ B⁻, conclude Γ; · ⊢ A⁻ ∧⁻ B⁻

    Left focus sequents Γ; [A⁻] ⊢ U, where U must be stable:
      ↑L   :  from Γ; A⁺ ⊢ U, conclude Γ; [↑A⁺] ⊢ U
      ⊃L   :  from Γ ⊢ [A⁺] and Γ; [B⁻] ⊢ U, conclude Γ; [A⁺ ⊃ B⁻] ⊢ U
      (no rule ⊤⁻L)
      ∧⁻L₁ :  from Γ; [A⁻] ⊢ U, conclude Γ; [A⁻ ∧⁻ B⁻] ⊢ U
      ∧⁻L₂ :  from Γ; [B⁻] ⊢ U, conclude Γ; [A⁻ ∧⁻ B⁻] ⊢ U

    Fig. 4. Focused sequent calculus for polarized intuitionistic logic (sans suspended propositions).
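As with the unpolarized syntax, these sequent forms have a direct transcription into data types. The following Haskell sketch is again our own illustration, reusing the Pos and Neg types from the previous sketch; the ordering and stability invariants are recorded only in comments:

```haskell
-- Components of sequents: hypothetical context Γ, ordered inversion
-- context Ω, and succedent U (the right-focus form [A⁺] is kept separate).
type Hyps = [Neg]        -- Γ, treated informally as a multiset
type Inv  = [Pos]        -- Ω, a genuinely ordered sequence

data Succ                -- U, excluding the right-focus form [A⁺]
  = SuccPos Pos          -- A⁺ (a stable succedent)
  | SuccNeg Neg          -- A⁻ (decomposed by the invertible right rules)
  deriving Show

-- The three-sequent view of Figure 4.
data Sequent
  = RightFocus Hyps Pos        -- Γ ⊢ [A⁺]
  | Inversion  Hyps Inv Succ   -- Γ; Ω ⊢ U
  | LeftFocus  Hyps Neg Succ   -- Γ; [A⁻] ⊢ U, where U must be stable
  deriving Show
```

Nothing in these types forces a single proposition to be in focus or the succedent of a left focus sequent to be stable; those invariants are maintained by the rules of Figure 4.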
While we treat the hypothetical context Γ informally as a multiset, the inversion context Ω is not a multiset. Instead, it should be thought of as an ordered sequence of positive propositions: the empty sequence is written as "·" and "," is an associative append operator. Whenever the context Ω is non-empty, the only applicable rule is the one that decomposes the left-most positive connective in the context.

The picture in Figure 4 is quite uniform: every rule except for focR and focL breaks down a single connective, while the shift rules regulate the focus and inversion phases. Read from bottom up, they put an end to the process of breaking down a proposition under focus (↓R, ↑L) and to the process of breaking down a proposition with inversion (↓L, ↑R). All other rules maintain focus or inversion on the subformulas of a proposition.

The conclusions of focR and focL are inversion sequents with empty inversion contexts and succedents U that are stable, meaning that there is no possibility of applying an invertible rule. These stable sequents (sometimes called neutral) have an important place in focused sequent calculi. In the introduction, we claimed that there was an essential equivalence between LJF and our focused presentation, but this is only true if we ignore the internal structure of focus and inversion phases and work with synthetic rules, the derivation fragments comprised of one or more inversion phases stacked on top of a single focused phase. In the synthetic view of focusing, we abstract away from the internal structure of focusing phases to emphasize the stable sequents that lie between them [Andreoli 2001]. LJF and our presentation of polarized intuitionistic logic have different internal structure, but we claim that the systems give rise to the same synthetic rules.⁸ Aside from our use of shifts, we depart from Liang and Miller in ways that are technically relevant but that are invisible at the level of synthetic connectives.

⁸ Our presentation does not have the first-order quantifiers present in LJF, and LJF lacks a negative unit ⊤⁻, but quantifiers can be added to our system easily, and the same is true for ⊤⁻ in LJF.

The stability requirement for U in left focus sequents Γ; [A⁻] ⊢ U can equivalently be stated as an extra premise "U stable" for the rule ↑L, which is done in the accompanying Twelf development.

2.2 Suspended propositions

The pleasant picture of focusing given above is complicated when we consider atomic propositions. Atomic propositions are best understood as stand-ins for arbitrary propositions, and so our polarized logic has both positive atomic propositions (stand-ins for arbitrary positive propositions) and negative atomic propositions (stand-ins for arbitrary negative propositions).

When we are performing inversion and we reach an atomic proposition, we do not have enough information to break down that proposition any further, but we have not reached a shift. We have to do something different. What we do is suspend that atomic proposition, either in the hypothetical context or in the succedent. We represent a suspended atomic proposition as ⟨p⁺⟩ or ⟨p⁻⟩. If we wanted to closely follow existing focused sequent calculi, we would introduce two more rules for proving atomic propositions in focus using a suspended atomic proposition. The resulting extension of Figure 4 would look something like this:

    from Γ, ⟨p⁺⟩; Ω ⊢ U, conclude Γ; p⁺, Ω ⊢ U
    from Γ; · ⊢ ⟨p⁻⟩, conclude Γ; · ⊢ p⁻
    Γ, ⟨p⁺⟩ ⊢ [p⁺]
    Γ; [p⁻] ⊢ ⟨p⁻⟩
    (⟨p⁻⟩ stable: a suspended negative atom counts as a stable succedent)

This treatment is not incorrect and is obviously analogous to the init rule from the unfocused system in Figure 1.
Nevertheless, we contend that this is a design error and a large part of why it has historically been difficult to prove the identity theorem for focused systems. We generalize these rules by allowing the hypothetical context to contain arbitrary suspended positive propositions (not just atomic