Why should we care about syntactic proofs if we can show semantically that statements are true?

I am talking about classical logic here.



I admit this might be a naive question, but as far as I understand it:
Syntactic entailment means there is a proof using the syntax of the language, while on the other hand semantic entailment does not care about the syntax; it simply means that a statement must be true if a set of other statements are also true.



That being said, isn't semantic entailment sufficient to know whether or not a statement is true? Why do we need syntactic proofs?



Granted, I know that in the case of boolean logic, proving statements by truth tables gets intractable very fast, but in essence, isn't semantic entailment "superior", since it does not rely on how we construct the grammar?



Thank you



Edit: Suppose it weren't the case that finding a satisfying assignment for an arbitrary boolean statement is an exponentially hard problem; would we even need syntactic entailment then?











  • You might be interested in "The Problem of Logical Omniscience" as, e.g., described in Chapter 5 here by Scott Aaronson. – Jair Taylor, 8 hours ago


















logic

asked 8 hours ago by Threnody (624 bronze badges)
2 Answers
First of all, let me set the terminology straight:



By a syntactic proof ($\vdash$) we mean a proof purely operating on a set of rules that manipulate strings of symbols, without talking about semantic notions such as assignment, truth, model or interpretation. A syntactic proof system is one that says, for example, "If you have $A$ written on one line and $B$ on another line, then you are allowed to write the symbols $A \land B$ on a line below that." Examples of syntactic proof systems are natural deduction in its various flavors and Beth tableaux, a.k.a. truth trees.



By a semantic proof ($\vDash$) we mean a proof operating on the semantic notions of the language, such as assignment, truth, model or interpretation. Examples of semantic proofs are truth tables, presentations of (counter)models, or arguments in text (along the lines of "Suppose $A \to B$ is true. Then there is an assignment such that...").



Furthermore, the term "entailment" is usually understood as a purely semantic notion ($\vDash$), while the syntactic counterpart ($\vdash$) is normally referred to as derivability.



(The division "$\vDash$ = semantics/models and $\vdash$ = syntax/proofs" is oversimplifying matters a bit -- proof-theoretic semantics, for example, argues that a semantics can be established in terms of formal (= "syntactic") proofs rather than just by model-theoretic considerations, but for the sake of this explanation, let's keep this simpler two-fold distinction.)



I'm clarifying this because the way you set out things is not entirely accurate:




Syntactic entailment means there is a proof using the syntax of the
language




In a way yes, the syntax of a logic is always relevant when talking about notions such as entailment or derivability -- but the crucial feature that makes us call this notion syntactic is not because the syntax of the language is involved in establishing entailment or derivability relations but because the set of rules we use is purely syntactic, i.e. merely operating on strings of symbols without making explicit reference to meaning.




while on the other hand semantic entailment does not care about the syntax




That's not quite accurate -- in order to establish the truth value of a formula, and hence notions such as validity or entailment, we have to investigate the syntax of the formula in order to determine any truth value at all. After all, truth is defined inductively on the structure (= the syntax) of formulas: "$[\![A \land B]\!]_v = \text{true}$ iff $[\![A]\!]_v = \text{true}$ and $[\![B]\!]_v = \text{true}$, ..." If we didn't care about syntax, then we couldn't talk about semantics at all.
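To make the inductive definition concrete, here is a minimal Python sketch (the nested-tuple encoding of formulas is my own assumption, not standard notation): the truth value is computed by recursion on the syntactic structure of the formula, exactly as the clause for $\land$ does.

```python
# Sketch: truth defined by recursion on syntax. Formulas are encoded as
# nested tuples, e.g. ("and", "A", ("->", "A", "B")) -- an assumed encoding.
def truth(formula, v):
    """Compute [[formula]]_v for an assignment v (a dict from variables to bools)."""
    if isinstance(formula, str):              # base case: propositional variable
        return v[formula]
    op, *args = formula
    if op == "not":
        return not truth(args[0], v)
    if op == "and":                           # [[A and B]]_v = true iff both are true
        return truth(args[0], v) and truth(args[1], v)
    if op == "or":
        return truth(args[0], v) or truth(args[1], v)
    if op == "->":                            # material implication
        return (not truth(args[0], v)) or truth(args[1], v)
    raise ValueError(f"unknown connective: {op}")

print(truth(("and", "A", ("->", "A", "B")), {"A": True, "B": True}))  # True
```

Note that the recursion can only get started because the formula has a syntactic structure to recurse on.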





Now to your actual question:




Why should we care about syntactic proofs if we can show semantically that statements are true?




The short answer is: Because syntactic proofs are often a lot easier.



For propositional logic, the world is still relatively innocent: We can just write down a truth table, look at the truth value of each formula, and check whether every line in which the columns of all the premises show "true" also has "true" in the conclusion column. As you point out, this procedure quickly explodes for formulas with many propositional variables, but it's still a deterministic procedure that's doable in finite time.
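The truth-table procedure just described can be sketched in a few lines (a toy encoding, assumed for illustration: formulas are Python predicates over assignments, and the function name `entails` is mine):

```python
from itertools import product

def entails(premises, conclusion, variables):
    """Brute-force semantic check: premises |= conclusion iff every truth
    assignment satisfying all premises also satisfies the conclusion."""
    for values in product([False, True], repeat=len(variables)):
        v = dict(zip(variables, values))          # one line of the truth table
        if all(p(v) for p in premises) and not conclusion(v):
            return False                          # found a countermodel
    return True

# Example: A, A -> B |= B, checked over all four assignments.
A = lambda v: v["A"]
A_implies_B = lambda v: (not v["A"]) or v["B"]
B = lambda v: v["B"]
print(entails([A, A_implies_B], B, ["A", "B"]))   # True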



We could also present a natural-language proof arguing in terms of truth assignments. This can be a bit more cumbersome, but it might be more instructive, and it is still relatively manageable for the comparatively simple language and interpretation of propositional logic.



But things get worse when we go to first-order logic. Here we are confronted with formulas that quantify over models whose domains are potentially infinite. Even worse, in contrast to propositional logic, where the number of assignments (~= interpretations) is always finite and completely determined by the number of propositional variables, the structures (~= interpretations) in which a first-order formula may or may not be true are unlimited in size and shape. That is, not only can the structures themselves be infinite, but we are now facing an infinite number of structures in which our formulas can be interpreted in the first place. Simply writing down a truth table will no longer work for the language of predicate logic, so determining truth values, and hence semantic properties and relations such as validity and entailment, is no longer a simple deterministic procedure.



So if we want to present a semantic proof, we have to revert to arguments about the structures that a formula does or does not satisfy. This is where an interesting duality enters:





  • To prove that




    • an existentially quantified semantic meta-statement is true (for example, "The
      formula $\phi$ is satisfiable", i.e. "There exists a model of $\phi$"), or

    • a universally quantified semantic meta-statement is false (for example $\not\vDash \phi$, "The
      formula $\phi$ is not valid", i.e. "It is not the case that all structures satisfy $\phi$"),


    it suffices to provide one (counter)model and we're done: If we find just one structure in which $\phi$ is true, then we know that $\phi$ is satisfiable, and conversely, if we find one structure in which $\phi$ is not true, then we know that $\phi$ is not valid.



    Analogously, to prove that




    • an existentially quantified object-language formula ($\exists x ...$) is true in a structure, or

    • a universally quantified object-language formula ($\forall x ...$) is false in a structure,


    it suffices to find one element in the structure's domain that provides an example for the existentially quantified formula or, respectively, a counterexample to the universal quantification, and we're done.




However,





  • to prove that




    • a universally quantified semantic meta-statement is true (for example $\vDash \phi$, "The
      formula $\phi$ is valid", i.e. "All structures satisfy $\phi$"), or

    • an existentially quantified semantic meta-statement is false (for example "The
      formula $\phi$ is unsatisfiable", i.e. "There exists no model of $\phi$"),


    we are suddenly faced with the difficult task of making a claim about all possible structures. We cannot simply list them, as there are infinitely many of them, so all we can do is write a natural-language text arguing over the possible truth values of formulas, eventually showing that all structures must succeed or fail to fulfill a certain requirement.



    Analogously, to prove that




    • a universally quantified object-language formula ($\forall x ...$) is true in a structure, or

    • an existentially quantified object-language formula ($\exists x ...$) is false in a structure,


    we have to iterate over all the elements in the structure's domain. If the domain is finite, we are lucky and can simply go through all of the possible values (although this may take quite some time if the domain is large enough), but if it is infinite, there is no way we can ever get done if we just brute-force check the formula for the elements one after the other.




This is a rather unpleasant situation, and exactly the point where syntactic proofs come in very handy.



Recall the definition of entailment:




$\Gamma \vDash \phi$ iff all interpretations that satisfy all formulas in $\Gamma$ also satisfy $\phi$




or equivalently




$\Gamma \vDash \phi$ iff there is no interpretation that satisfies all formulas in $\Gamma$ but not $\phi$.




This is precisely the kind of universal quantification that makes purely semantic proofs difficult: We would have to establish a proof over the infinity of all the possible structures to check whether the semantic entailment relation does or does not hold.

But now look at the definition of syntactic derivability:




$\Gamma \vdash \phi$ iff there is a derivation with premises from $\Gamma$ and conclusion $\phi$.




The nasty universal quantifier suddenly became an existential one! Rather than having to argue over all possible structures, it now suffices to show just one syntactic derivation, and we're done. (The same applies to the case where we don't have any premises: $\vDash \phi$ ("$\phi$ is valid" = "true in all structures") vs. $\vdash \phi$ ("$\phi$ is derivable" = "there is a derivation with no open assumptions and $\phi$ as the conclusion").) This is an enormous advantage -- call it "superior" if you want.
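To see why this is only an existential claim, note that verifying a given derivation is a purely local, finite check. Here is a minimal sketch for a toy system whose only rule is modus ponens (the step format and the tuple encoding of implications are my own assumptions, not a standard calculus):

```python
# Toy derivation checker: each step is either a premise or follows from two
# earlier lines by modus ponens. Implications are encoded as ("->", phi, psi).
def check_derivation(premises, steps):
    derived = []
    for formula, rule in steps:
        if rule == "premise":
            ok = formula in premises
        else:                                   # rule = ("mp", i, j)
            _, i, j = rule                      # line j must be: derived[i] -> formula
            ok = derived[j] == ("->", derived[i], formula)
        if not ok:
            return False
        derived.append(formula)
    return True

# Gamma = {A, A -> B}; one application of modus ponens derives B.
gamma = ["A", ("->", "A", "B")]
proof = [("A", "premise"), (("->", "A", "B"), "premise"), ("B", ("mp", 0, 1))]
print(check_derivation(gamma, proof))           # True
```

Exhibiting this one list of steps settles the derivability claim; no quantification over structures (or over other derivations) is needed.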



Now we have a kind of disparity: For some things semantics is hard whereas syntax is easy, so how can we use this disparity for good?

Luckily, in the case of classical logic, we are equipped with soundness and completeness:




Soundness: If $\Gamma \vdash \phi$, then $\Gamma \vDash \phi$ -- if we found a syntactic derivation, then we know that the entailment holds semantically.



Completeness: If $\Gamma \vDash \phi$, then $\Gamma \vdash \phi$ -- if a semantic entailment holds, then we can find a syntactic derivation.




While any reasonable derivation system will be sound w.r.t. the language's semantics, completeness is a non-trivial and very powerful result: If we want to prove a semantic entailment, then by completeness we know that there must be a syntactic derivation, so we can go and find just one such derivation; and as soon as we do, soundness assures us that this is indeed a proof that the entailment holds semantically. Hence, we can use syntactic proofs to avoid cumbersome semantic arguments that involve meta-logical quantification over all structures. This is pretty neat.



Now note how things turn around for the syntactic calculus:




  • To prove that


    • a universally quantified syntactic meta-statement is true, or an existentially quantified syntactic meta-statement is false (for example $\not\vdash \phi$, "The
      formula $\phi$ is underivable", i.e. "There is no derivation with conclusion $\phi$" / "All attempts to find a derivation with conclusion $\phi$ eventually fail"),




we would have to argue over all possible syntactic proofs, which can again be difficult.



Now we can apply the soundness and completeness results in the other direction: If we want to show that a formula is not derivable, then by contraposition on completeness we know it is not valid (because if it were, then by completeness there would be a derivation), so we can carry out a semantic proof by providing just one countermodel to the validity of $\phi$, and we're almost done; because then, again by contraposition on soundness, we can be sure that if the formula is not valid, there will be no derivation (because if there were a derivation for something that is not semantically valid, our system would be unsound). So we have our proof of the underivability without needing to argue over hypothetical derivations that can't exist.



And this is precisely how the aforementioned duality comes about:



--------------------------------------------------------------------------------
               semantic                        syntactic
--------------------------------------------------------------------------------
positive       ⊨                               ⊢
               universal quantif.              existential quantif.
               ("all structures"/              ("there is a derivation"/
               "no structure such that not")   "not all derivations fail")
               => difficult 🙁                 => easy 🙂

negated        ⊭                               ⊬
               negated universal quantif.      negated existential quantif.
               ("not all structures"/          ("there is no syntactic proof"/
               "there exists a countermodel")  "all attempts at proofs fail")
               => easy 🙂                      => difficult 🙁
--------------------------------------------------------------------------------


Thanks to soundness and completeness, the duality of semantic and syntactic proofs can help us bridge the difficult parts:





  • $\vDash$ ("all structures" -- difficult) $\xrightarrow{\text{completeness}}$
    $\vdash$ ("some derivation" -- easy) $\xrightarrow{\text{soundness}}$ $\vDash$


  • $\not\vdash$ ("no derivation" -- difficult) $\xrightarrow{\text{contrapos. completeness}}$ $\not\vDash$ ("some countermodel" -- easy) $\xrightarrow{\text{contrapos. soundness}}$ $\not\vdash$


Putting these bridges into the picture from before:



------------------------------------------------------------------------------
               semantic                        syntactic
------------------------------------------------------------------------------
                        completeness
                       ------------->
positive   🙁 ⊨                            ⊢ 🙂
                       <-------------
                          soundness

                  contrapos. completeness
                 <-----------------------
negated    🙂 ⊭                            ⊬ 🙁
                 ----------------------->
                   contrapos. soundness
------------------------------------------------------------------------------


I think the existence of syntactic calculi already is wonderful enough simply for the mathematical beauty of this symmetry.







  • This is all I could have hoped for and more! Thank you very much for the time you took to answer this! – Threnody, 2 hours ago










  • If I may ask for clarification: "$\Gamma \vDash \phi$ iff all interpretations that satisfy all formulas in $\Gamma$ also satisfy $\phi$" -- by interpretation do you mean assignment? I am still new to the language used in these subjects. – Threnody, 2 hours ago












  • @Threnody This was intentionally left unspecific because the underlying idea is the same for both propositional and first-order logic (but I probably should have worded it more clearly) - in the case of propositional logic, interpretations will be assignments, and in the case of first-order logic, interpretations will be structures. – lemontree, 2 hours ago



















The good news is that for first-order logic we have the soundness and completeness theorems: For any first-order theory $T$ and any sentence $\sigma$ in the same language, $T \models \sigma$ if and only if $T \vdash \sigma$. That is, semantic and syntactic truth are equivalent.



In light of that theorem, you could, if you wish, focus entirely on semantic truth (as is sometimes done by model theorists) or entirely on syntactic truth (as is sometimes done by proof theorists). Each area has its own questions and techniques, but they are closely linked, so there is also opportunity for mixing the two.



One direct consequence of the completeness theorem is the compactness theorem, which is absolutely central to first-order model theory. It says that if every finite subset of some theory $T$ has a model, then $T$ itself has a model. You can prove this by observing that if $T$ had no model then you could write a proof of a contradiction from $T$. Since every proof is finite, only finitely many statements from $T$ are needed in your proof, so there must be a finite subset of $T$ that proves a contradiction, and hence has no models. (It's possible to prove compactness by methods that feel more semantic, like ultraproducts, but the proof sketched here is the first one that many people see).







  • What's the point of talking about the compactness theorem with respect to OP's question? – Ruggiero Rilievi, 8 hours ago










  • @RuggieroRilievi It gives an application of syntactic methods to semantics. Of course, you don't need to prove compactness via completeness. There's the ultraproduct-based proof. Moreover, there is also a Henkin-style proof of compactness which seems to be folklore - replace the provability relation $\vdash$ with the finitary entailment relation given by $\Gamma \models_{fin} \varphi$ iff $\Gamma' \models \varphi$ for some finite $\Gamma' \subseteq \Gamma$, and run the argument as usual. Personally, I love this Henkin argument since it can let you "backsolve" for a sound and complete proof system. – Noah Schweber, 8 hours ago










  • @NoahSchweber Do you have a reference for that Henkin argument? I'm personally a fan of ultraproducts (mostly because of their topological interpretation), but I'd be very interested to see the proof you're describing. – Chris Eagle, 8 hours ago










  • @ChrisEagle I've never actually seen it written up, sadly, and it's a bit long for a comment here. But it really is just a quick find-and-replace. You extend the language/theory as usual (but with $\models_{fin}$ in place of $\vdash$), and then look at the structure consisting of equivalence classes of terms modulo finitary-entailment-equivalence. Personally my favorite part about it is that it suggests the idea of looking at generalizations of term models, where we replace provability modulo a theory with some other consequence relation. – Noah Schweber, 8 hours ago










  • If you ask a question here, though, I'd be happy to write it up in some detail - it really should appear more explicitly. – Noah Schweber, 8 hours ago

















    Analogously, to prove that




    • an universally quantified object-language formula ($forall x ...$) is true in a structure or

    • an existentially quantified object-language formula ($exists x ...$) is false in a structure,


    we have to iterate over all the elements in the structure's domain. If the domain is finite, we are lucky and can simply go through all of the possible values (although this may take quite some time if the domain is large enough), but if it is infinite, there is no way we can ever get done if we just brute force check the formula for the elements one after the other.




This is a rather unpleasant situation, and exactly the point where syntactic proofs come in very handy.



Recall the definition of entailment:




$Gamma vDash phi$ iff all interpretations that satisfy all formulas in $Gamma$ also satisfy $phi$




or equivalently




$Gamma vDash phi$ iff there is no interpretation that satisfies all formulas in $Gamma$ but not $phi$.




This is precisely the kind of universal quantification that makes purely semantic proofs difficult: We would have to establish a proof over the infinity of all the possible structures to check whether the semantic entailment relation does or does not hold.

But now look at the definition of syntactic derivability:




$Gamma vdash phi$ iff there is a derivation with premises from $Gamma$ and conclusion $phi$.




The nasty universal quantifier suddenly became an existential one! Rather than having to argue over all the possible structures, it now suffices to show just one syntactic derivation and we're done. (The same applies to the case where we don't have any premises, i.e. $vDash phi$ ("$phi$ is valid" = "true in all structures" vs. $vdash phi$ (= "$phi$ is derivable" = "there is a derivation with no open assumptions and $phi$ as the conclusion). This is an enormous advantage -- call it "superior" if you want.



Now we have a kind of disparity: For some things semantics is hard whereas syntax is easy, so how can we use this disparity for good?

Luckily, in the case of classical logic, we are equipped with soundness and completeness:




Soundness: If $Gamma vdash phi$, then $Gamma vDash phi$ -- if we found a syntactic derivation, then we know that the entailment holds semantically.



Completeness: If $Gamma vDash phi$, then $Gamma vdash phi$ -- if a semantic entailment holds, then we can find a syntactic derivation.




While any reasonable derivation system will be sound w.r.t. to the language's semantics, completeness is a non-trivial and very powerful result: If we want to prove a semantic entailment, by completeness we know that there must be a syntactic derivation, so we can go find just one such derivation, and as soon as we did, soundness ensures us that this is indeed a proof that the entailment holds semantically. Hence, we can use syntactic proofs to avoid cumbersome semantic arguments that involve meta-logical quantification over all structures. This is pretty neat.



Now note how things turn around for the syntactic calculus:




  • To prove that


    • a universally quantifed syntactic meta-statement is true or an existentially quantified semantic meta-statement is false (For example $not vdash phi$, "The
      formula $phi$ is underivable", i.e. "There is no derivation with conclusion $phi$"/"All attempts to find a derivation with conclusion $phi$ eventually fail)




we would have to argue over all possible syntactic proofs, which can again be difficult.



Now we can apply the soundness and completness results in the other direction: If we want to show that a formula is not derivable, then by contraposition on completeness we know it is not valid (because if it were, then by completeness there would be a derivation), so we can carry out a semantic proof by providing just one countermodel to the validity of $phi$ and we're almost done; beause then again by contraposition on soundness, we can be sure that if the formula is not valid, there will be no derivation (because if there were a derivation for something that is not semantially valid, our system would be unsound), so we have our proof of the underivability without needing to argue over hypothetical derivations that can't exist.



And this is precisely how the aforementioned duality comes about:



--------------------------------------------------------------------------------
semantic syntactic
--------------------------------------------------------------------------------
positive ⊨ ⊢
universal quantif. existential quantif.
("all structures"/ ("there is a derivation"/
"no structure such that not") "not all derivations fail")
=> difficult 🙁 => easy 🙂

negated ⊭ ⊬
negated universal quantif. negated existential quantif.
("not all structures"/ ("there is no syntatic proof"/
"there exists a countermodel") "all attempts at proofs fail")
=> easy 🙂 => difficult 🙁
--------------------------------------------------------------------------------


Thanks to soundness and completness, the duality of semantic and a syntactic proofs can help us bridge the difficult parts:





  • $vDash$ ("all structures" -- difficult) $ xrightarrow{text{completeness}} $
    $vdash$ ("some derivation" -- easy) $ xrightarrow{text{soundness}} $ $vDash$


  • $not vdash$ ("no derivation" -- difficult) $ xrightarrow{text{contrapos. completeness}} $ $not vDash$ ("some countermodel" -- easy) $ xrightarrow{text{contrapos. soundness}} $ $not vdash$


Puting these briges into the picture from before:



------------------------------------------------------------------------------
semantic syntactic
------------------------------------------------------------------------------

completeness
------------->
positive 🙁 ⊨ ⊢ 🙂
<-------------
soundness

contrapos. completeness
<-----------------------
negated 🙂 ⊭ ⊬ 🙁
----------------------->
contrapos. soundness
------------------------------------------------------------------------------


I think the existence of syntactic calculi already is wonderful enough simply for the mathematical beauty of this symmetry.






share|cite|improve this answer











$endgroup$















  • $begingroup$
    This is all I could have hoped for and more! Thank you very much for the time you took to answer this!
    $endgroup$
    – Threnody
    2 hours ago










  • $begingroup$
    If I may ask for clarification: Γ⊨ϕ iff all interpretations that satisfy all formulas in Γ also satisfy ϕ By interpretation do you mean assignment? I am still new to the language used in these subjects
    $endgroup$
    – Threnody
    2 hours ago












  • $begingroup$
    @Threnody This was intentionally left unspecific because the underlying idea is the same for both propositional and first-order logic (but I probably should have worded it more clearly) - in the case of propositional logic, interpretations will be assignments, and in the case of first-order logic, interpretations will be structures.
    $endgroup$
    – lemontree
    2 hours ago
















7












$begingroup$

First of all, let me set the terminology straight:



By a syntactic proof ($\vdash$) we mean a proof purely operating on a set of rules that manipulate strings of symbols, without talking about semantic notions such as assignment, truth, model or interpretation. A syntactic proof system is one that says, for example, "If you have $A$ written on one line and $B$ on another line, then you are allowed to write the symbols $A \land B$ on a line below that." Examples of syntactic proof systems are natural deduction in its various flavors and Beth tableaux, a.k.a. truth trees.



By a semantic proof ($\vDash$) we mean a proof operating on the semantic notions of the language, such as assignment, truth, model or interpretation. Examples of semantic proofs are truth tables, the presentation of (counter)models, or arguments in text (along the lines of "Suppose $A \to B$ is true. Then there is an assignment such that...").



Furthermore, the term "entailment" is usually understood as a purely semantic notion ($\vDash$), while the syntactic counterpart ($\vdash$) is normally referred to as derivability.



(The division "$\vDash$ = semantics/models and $\vdash$ = syntax/proofs" oversimplifies matters a bit -- proof-theoretic semantics, for example, argues that a semantics can be established in terms of formal (= "syntactic") proofs rather than just by model-theoretic considerations -- but for the sake of this explanation, let's stick with this simpler two-fold distinction.)



I'm clarifying this because the way you set out things is not entirely accurate:




Syntactic entailment means there is a proof using the syntax of the
language




In a way, yes: the syntax of a logic is always relevant when talking about notions such as entailment or derivability. But the crucial feature that makes us call this notion syntactic is not that the syntax of the language is involved in establishing entailment or derivability relations, but that the set of rules we use is purely syntactic, i.e. merely operating on strings of symbols without making explicit reference to meaning.




while on the other hand semantic entailment does not care about the syntax




That's not quite accurate: in order to establish the truth value of a formula, and hence notions such as validity or entailment, we have to investigate the syntax of the formula to determine any truth value at all. After all, truth is defined inductively on the structure (= the syntax) of formulas: "$[[A \land B]]_v = \text{true}$ iff $[[A]]_v = \text{true}$ and $[[B]]_v = \text{true}$, ..." If we didn't care about syntax, we couldn't talk about semantics at all.





Now to your actual question:




Why should we care about syntactic proofs if we can show semantically that statements are true?




The short answer is: Because syntactic proofs are often a lot easier.



For propositional logic, the world is still relatively innocent: we can just write down a truth table, look at the truth value of each formula, and check whether every line in which the columns of all the premises read "true" also has "true" in the conclusion column. As you point out, this procedure quickly explodes for formulas with many propositional variables, but it is still a deterministic procedure that is doable in finite time.
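The truth-table procedure above can be sketched in a few lines of Python. The encoding of formulas as boolean-valued functions and all names below are illustrative choices, not any standard library:

```python
from itertools import product

def entails(premises, conclusion, variables):
    """Gamma |= phi iff every assignment that makes all premises
    true also makes the conclusion true (checked by brute force)."""
    for values in product([True, False], repeat=len(variables)):
        v = dict(zip(variables, values))  # one line of the truth table
        if all(p(v) for p in premises) and not conclusion(v):
            return False  # this line is a countermodel
    return True  # all 2^n lines checked

# Example: p, p -> q |= q (modus ponens, checked semantically)
p = lambda v: v["p"]
p_to_q = lambda v: (not v["p"]) or v["q"]
q = lambda v: v["q"]
print(entails([p, p_to_q], q, ["p", "q"]))  # True
print(entails([p_to_q], p, ["p", "q"]))     # False
```

The loop walks through all $2^n$ lines of the truth table, which is exactly the exponential explosion mentioned above.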



We could also present a natural-language proof arguing in terms of truth assignments. This can be a bit more cumbersome, but may be more instructive, and is still quite manageable for the relatively simple language and interpretation of propositional logic.



But things get worse when we move to first-order logic. Here we are confronted with formulas that quantify over domains which are potentially infinite. Even worse, in contrast to propositional logic, where the number of assignments (~= interpretations) is always finite and completely determined by the number of propositional variables, the structures (~= interpretations) in which a first-order formula may or may not be true are unlimited in size and shape: not only can the structures themselves be infinite, but there are infinitely many structures in which our formulas can be interpreted in the first place. Simply writing down a truth table no longer works for the language of predicate logic, so determining truth values, and hence semantic properties and relations such as validity and entailment, is no longer a simple deterministic procedure.



So if we want to present a semantic proof, we have to resort to arguments about the structures that a formula does or does not satisfy. This is where an interesting duality enters:





  • To prove that




    • an existentially quantified semantic meta-statement is true (for example, "The
      formula $\phi$ is satisfiable", i.e. "There exists a model of $\phi$"), or

    • a universally quantified semantic meta-statement is false (for example, $\not\vDash \phi$: "The
      formula $\phi$ is not valid", i.e. "It is not the case that all structures satisfy $\phi$"),


    it suffices to provide one (counter)model and we're done: if we find just one structure in which $\phi$ is true, then we know that $\phi$ is satisfiable; conversely, if we find one structure in which $\phi$ is not true, then we know that $\phi$ is not valid.



    Analogously, to prove that




    • an existentially quantified object-language formula ($\exists x \ldots$) is true in a structure, or

    • a universally quantified object-language formula ($\forall x \ldots$) is false in a structure,


    it suffices to find one element in the structure's domain that provides an example for the existentially quantified formula or, respectively, a counterexample to the universal quantification, and we're done.




However,





  • to prove that




    • a universally quantified semantic meta-statement is true (for example, $\vDash \phi$: "The
      formula $\phi$ is valid", i.e. "All structures satisfy $\phi$"), or

    • an existentially quantified semantic meta-statement is false (for example, "The
      formula $\phi$ is unsatisfiable", i.e. "There exists no model of $\phi$"),


    we are suddenly faced with the difficult task of making a claim about all possible structures. We cannot simply list them, as there are infinitely many, so all we can do is write a natural-language argument about the possible truth values of formulas, eventually showing that all structures must meet, or must fail to meet, a certain requirement.



    Analogously, to prove that




    • a universally quantified object-language formula ($\forall x \ldots$) is true in a structure, or

    • an existentially quantified object-language formula ($\exists x \ldots$) is false in a structure,


    we have to iterate over all the elements of the structure's domain. If the domain is finite, we are lucky and can simply go through all the possible values (although this may take quite some time if the domain is large), but if it is infinite, we can never get done by brute-force checking the formula for the elements one after the other.
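This asymmetry is visible in how quantifier checks over a finite domain can terminate early. A sketch with a made-up structure (the domain and the predicate below are purely illustrative):

```python
# A finite structure: domain {1, ..., 1000}, one unary predicate "Even".
domain = range(1, 1001)
def even(x):
    return x % 2 == 0

# "exists x. Even(x)" is true in this structure: any() returns as soon
# as it hits the first witness (x = 2) -- a single element settles it.
exists_even = any(even(x) for x in domain)

# "forall x. Even(x)" is false here: all() stops at the counterexample
# x = 1. But had the formula been true, all() would have had to inspect
# every element -- and over an infinite domain it would never terminate.
forall_even = all(even(x) for x in domain)

print(exists_even, forall_even)  # True False
```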




This is a rather unpleasant situation, and exactly the point where syntactic proofs come in very handy.



Recall the definition of entailment:




$\Gamma \vDash \phi$ iff all interpretations that satisfy all formulas in $\Gamma$ also satisfy $\phi$




or equivalently




$\Gamma \vDash \phi$ iff there is no interpretation that satisfies all formulas in $\Gamma$ but not $\phi$.




This is precisely the kind of universal quantification that makes purely semantic proofs difficult: we would have to reason about the infinity of all possible structures to check whether the semantic entailment relation holds or not.

But now look at the definition of syntactic derivability:




$\Gamma \vdash \phi$ iff there is a derivation with premises from $\Gamma$ and conclusion $\phi$.




The nasty universal quantifier has suddenly become an existential one! Rather than having to argue over all possible structures, it now suffices to exhibit just one syntactic derivation, and we're done. (The same applies to the case where we don't have any premises: $\vDash \phi$ ("$\phi$ is valid" = "true in all structures") vs. $\vdash \phi$ ("$\phi$ is derivable" = "there is a derivation with no open assumptions and $\phi$ as its conclusion").) This is an enormous advantage -- call it "superior" if you want.
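To see how checking a given derivation is a purely local, finite matter, here is a toy verifier for a made-up system whose only rule is modus ponens (the encoding and the system itself are illustrative, not any standard calculus):

```python
def mp_follows(phi, earlier):
    """phi follows by modus ponens if, among the earlier lines,
    we find both some psi and the implication ('->', psi, phi)."""
    return any(("->", psi, phi) in earlier for psi in earlier)

def check_derivation(premises, lines):
    """A derivation is valid iff every line is a premise or follows
    by modus ponens from earlier lines -- a simple local check."""
    earlier = []
    for phi in lines:
        if phi not in premises and not mp_follows(phi, earlier):
            return False
        earlier.append(phi)
    return True

# Exhibiting this one three-line derivation establishes {p, p -> q} |- q:
p, q = "p", "q"
proof = [p, ("->", p, q), q]
print(check_derivation([p, ("->", p, q)], proof))  # True
print(check_derivation([p], [q]))                  # False: q unjustified
```

Note that verifying the derivation never mentions structures or truth at all; by soundness, this finite check nonetheless establishes the semantic entailment.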



Now we have a kind of disparity: For some things semantics is hard whereas syntax is easy, so how can we use this disparity for good?

Luckily, in the case of classical logic, we are equipped with soundness and completeness:




Soundness: If $\Gamma \vdash \phi$, then $\Gamma \vDash \phi$ -- if we find a syntactic derivation, then we know that the entailment holds semantically.



Completeness: If $\Gamma \vDash \phi$, then $\Gamma \vdash \phi$ -- if a semantic entailment holds, then we can find a syntactic derivation.




While any reasonable derivation system will be sound w.r.t. the language's semantics, completeness is a non-trivial and very powerful result: if we want to prove a semantic entailment, completeness tells us that there must be a syntactic derivation, so we can go and find just one such derivation; and once we have, soundness assures us that this is indeed a proof that the entailment holds semantically. Hence we can use syntactic proofs to avoid cumbersome semantic arguments that involve meta-logical quantification over all structures. This is pretty neat.



Now note how things turn around for the syntactic calculus:




  • To prove that


    • a universally quantified syntactic meta-statement is true, or an existentially quantified syntactic meta-statement is false (for example, $\not\vdash \phi$: "The
      formula $\phi$ is underivable", i.e. "There is no derivation with conclusion $\phi$" / "All attempts to find a derivation with conclusion $\phi$ eventually fail"),




we would have to argue over all possible syntactic proofs, which can again be difficult.



Now we can apply the soundness and completeness results in the other direction. If we want to show that a formula is not derivable, then by contraposition on completeness we know it is not valid (because if it were valid, then by completeness there would be a derivation). So we can carry out a semantic proof by providing just one countermodel to the validity of $\phi$, and we're almost done: by contraposition on soundness, we can be sure that if the formula is not valid, there is no derivation (because if there were a derivation of something that is not semantically valid, our system would be unsound). Thus we have our proof of underivability without needing to argue over hypothetical derivations that can't exist.
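As a concrete propositional instance of this direction, a single countermodel (here, one truth-table line) suffices to show that $q \to p$ does not follow from $p \to q$, and hence, by contraposition on soundness, is not derivable from it in any sound calculus. The encoding below is illustrative:

```python
from itertools import product

def countermodel(premise, conclusion, variables):
    """Search for one assignment making the premise true and the
    conclusion false; its existence refutes the entailment, and by
    soundness also refutes derivability in any sound system."""
    for values in product([True, False], repeat=len(variables)):
        v = dict(zip(variables, values))
        if premise(v) and not conclusion(v):
            return v  # one witness is all we need
    return None

p_to_q = lambda v: (not v["p"]) or v["q"]
q_to_p = lambda v: (not v["q"]) or v["p"]
cm = countermodel(p_to_q, q_to_p, ["p", "q"])
print(cm)  # {'p': False, 'q': True}
```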



And this is precisely how the aforementioned duality comes about:



--------------------------------------------------------------------------------
             semantic                          syntactic
--------------------------------------------------------------------------------
positive     ⊨                                 ⊢
             universal quantif.                existential quantif.
             ("all structures"/                ("there is a derivation"/
             "no structure such that not")     "not all derivations fail")
             => difficult 🙁                   => easy 🙂

negated      ⊭                                 ⊬
             negated universal quantif.        negated existential quantif.
             ("not all structures"/            ("there is no syntactic proof"/
             "there exists a countermodel")    "all attempts at proofs fail")
             => easy 🙂                        => difficult 🙁
--------------------------------------------------------------------------------


Thanks to soundness and completeness, the duality of semantic and syntactic proofs can help us bridge the difficult parts:





  • $\vDash$ ("all structures" -- difficult) $\xrightarrow{\text{completeness}}$
    $\vdash$ ("some derivation" -- easy) $\xrightarrow{\text{soundness}}$ $\vDash$


  • $\not\vdash$ ("no derivation" -- difficult) $\xrightarrow{\text{contrapos. completeness}}$ $\not\vDash$ ("some countermodel" -- easy) $\xrightarrow{\text{contrapos. soundness}}$ $\not\vdash$


Putting these bridges into the picture from before:



------------------------------------------------------------------------------
             semantic                          syntactic
------------------------------------------------------------------------------
                          completeness
                         ------------->
positive     🙁 ⊨                               ⊢ 🙂
                         <-------------
                           soundness

                    contrapos. completeness
                   <-----------------------
negated      🙂 ⊭                               ⊬ 🙁
                   ----------------------->
                    contrapos. soundness
------------------------------------------------------------------------------


I think the existence of syntactic calculi is already wonderful enough simply for the mathematical beauty of this symmetry.






  • This is all I could have hoped for and more! Thank you very much for the time you took to answer this!
    – Threnody
    2 hours ago

  • If I may ask for clarification: Γ⊨ϕ iff all interpretations that satisfy all formulas in Γ also satisfy ϕ — by interpretation do you mean assignment? I am still new to the language used in these subjects
    – Threnody
    2 hours ago

  • @Threnody This was intentionally left unspecific because the underlying idea is the same for both propositional and first-order logic (but I probably should have worded it more clearly) - in the case of propositional logic, interpretations will be assignments, and in the case of first-order logic, interpretations will be structures.
    – lemontree
    2 hours ago





  • $vDash$ ("all structures" -- difficult) $ xrightarrow{text{completeness}} $
    $vdash$ ("some derivation" -- easy) $ xrightarrow{text{soundness}} $ $vDash$


  • $not vdash$ ("no derivation" -- difficult) $ xrightarrow{text{contrapos. completeness}} $ $not vDash$ ("some countermodel" -- easy) $ xrightarrow{text{contrapos. soundness}} $ $not vdash$


Puting these briges into the picture from before:



------------------------------------------------------------------------------
semantic syntactic
------------------------------------------------------------------------------

completeness
------------->
positive 🙁 ⊨ ⊢ 🙂
<-------------
soundness

contrapos. completeness
<-----------------------
negated 🙂 ⊭ ⊬ 🙁
----------------------->
contrapos. soundness
------------------------------------------------------------------------------


I think the existence of syntactic calculi already is wonderful enough simply for the mathematical beauty of this symmetry.






share|cite|improve this answer











$endgroup$



First of all, let me set the terminology straight:



By a syntactic proof ($\vdash$) we mean a proof purely operating on a set of rules that manipulate strings of symbols, without talking about semantic notions such as assignment, truth, model or interpretation. A syntactic proof system is one that says, for example, "If you have $A$ written on one line and $B$ on another line, then you are allowed to write the symbols $A \land B$ on a line below that." Examples of syntactic proof systems are natural deduction in its various flavors and Beth tableaux, aka truth trees.



By a semantic proof ($\vDash$) we mean a proof operating on the semantic notions of the language, such as assignment, truth, model or interpretation. Examples of semantic proofs are truth tables, presentation of (counter)models, or arguments in text (along the lines of "Suppose $A \to B$ is true. Then there is an assignment such that...").



Furthermore, the term "entailment" is usually understood as a purely semantic notion ($\vDash$), while the syntactic counterpart ($\vdash$) is normally referred to as derivability.



(The division "$\vDash$ = semantics/models and $\vdash$ = syntax/proofs" is oversimplifying matters a bit -- proof-theoretic semantics, for example, argues that a semantics can be established in terms of formal (= "syntactic") proofs rather than just by model-theoretic considerations, but for the sake of this explanation, let's stick with this simpler two-fold distinction.)



I'm clarifying this because the way you set out things is not entirely accurate:




Syntactic entailment means there is a proof using the syntax of the
language




In a way, yes -- the syntax of a logic is always relevant when talking about notions such as entailment or derivability. But what makes us call this notion syntactic is not that the syntax of the language is involved in establishing entailment or derivability relations; it is that the set of rules we use is purely syntactic, i.e. merely operating on strings of symbols without making explicit reference to meaning.




while on the other hand semantic entailment does not care about the syntax




That's not quite accurate -- in order to establish the truth value of a formula, and hence notions such as validity or entailment, we have to investigate the syntax of the formula to determine any truth value at all. After all, truth is defined inductively on the structure (= the syntax) of formulas: "$[\![A \land B]\!]_v = \text{true iff } [\![A]\!]_v = \text{true and } [\![B]\!]_v = \text{true}$ ..." If we didn't care about syntax, then we couldn't talk about semantics at all.





Now to your actual question:




Why should we care about syntactic proofs if we can show semantically that statements are true?




The short answer is: Because syntactic proofs are often a lot easier.



For propositional logic, the world is still relatively innocent: we can just write down a truth table, read off the truth value of each formula, and check whether every line in which all the premise columns show "true" also shows "true" in the conclusion column. As you point out, this procedure quickly explodes for formulas with many propositional variables, but it is still a deterministic procedure that's doable in finite time.
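To make the truth-table procedure concrete, here is a minimal brute-force entailment checker in Python -- a sketch under my own encoding (formulas as Boolean functions of an assignment dict; the name `entails` and the example formulas are not standard, just illustrative):

```python
from itertools import product

def entails(premises, conclusion, variables):
    """Semantic check by truth table: Gamma |= phi iff every
    assignment making all premises true also makes phi true."""
    for values in product([True, False], repeat=len(variables)):
        v = dict(zip(variables, values))  # one line of the truth table
        if all(p(v) for p in premises) and not conclusion(v):
            return False  # this line is a countermodel
    return True

# Modus ponens: {A, A -> B} |= B
A = lambda v: v["A"]
A_implies_B = lambda v: (not v["A"]) or v["B"]
B = lambda v: v["B"]
print(entails([A, A_implies_B], B, ["A", "B"]))  # True
print(entails([A_implies_B], A, ["A", "B"]))     # False -- a countermodel exists
```

The checker is exponential in the number of variables, which is exactly the explosion mentioned above.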



We could also present a natural-language proof arguing in terms of truth assignments. This can be a bit more cumbersome, but it might be more instructive, and it is still relatively manageable for the comparatively simple language and interpretation of propositional logic.



But things get worse when we move to first-order logic. Here we are confronted with formulas that are interpreted in structures whose domains are potentially infinite. Even worse, in contrast to propositional logic, where the number of assignments (~= interpretations) is always finite and completely determined by the number of propositional variables, the structures (~= interpretations) in which a first-order formula may or may not be true are unlimited in number, size and shape. That is, not only can the structures themselves be infinite, but we are now facing an infinite number of structures in which our formulas can be interpreted in the first place. Simply writing down a truth table no longer works for the language of predicate logic, so determining truth values, and hence semantic properties and relations such as validity and entailment, is no longer a simple deterministic procedure.



So if we want to present a semantic proof, we have to revert to arguments about the structures that a formula does or does not satisfy. This is where an interesting duality enters:





  • To prove that




    • an existentially quantified semantic meta-statement is true (for example "The formula $\phi$ is satisfiable", i.e. "There exists a model of $\phi$"), or

    • a universally quantified semantic meta-statement is false (for example $\not\vDash \phi$, "The formula $\phi$ is not valid", i.e. "It is not the case that all structures satisfy $\phi$"),


    it suffices to provide one (counter)model and we're done: if we find just one structure in which $\phi$ is true, then we know that $\phi$ is satisfiable, and conversely, if we find one structure in which $\phi$ is not true, then we know that $\phi$ is not valid.



    Analogously, to prove that




    • an existentially quantified object-language formula ($\exists x ...$) is true in a structure, or

    • a universally quantified object-language formula ($\forall x ...$) is false in a structure,


    it suffices to find one element in the structure's domain that provides a witness for the existential quantification or, respectively, a counterexample to the universal quantification, and we're done.




However,





  • to prove that




    • a universally quantified semantic meta-statement is true (for example $\vDash \phi$, "The formula $\phi$ is valid", i.e. "All structures satisfy $\phi$"), or

    • an existentially quantified semantic meta-statement is false (for example "The formula $\phi$ is unsatisfiable", i.e. "There exists no model of $\phi$"),


    we are suddenly faced with the difficult task of making a claim about all possible structures. We cannot simply list them, as there are infinitely many, so all we can do is write a natural-language argument about the possible truth values of formulas, eventually showing that all structures must meet, or must fail, a certain requirement.



    Analogously, to prove that




    • a universally quantified object-language formula ($\forall x ...$) is true in a structure, or

    • an existentially quantified object-language formula ($\exists x ...$) is false in a structure,


    we have to iterate over all the elements of the structure's domain. If the domain is finite, we are lucky and can simply go through all the possible values (although this may take quite some time if the domain is large), but if it is infinite, there is no way we can ever finish by brute-force checking the formula for the elements one after the other.
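This asymmetry shows up directly in code. The following Python sketch (the helper names `holds_forall`/`holds_exists` are mine) evaluates quantified formulas over a finite domain by plain iteration -- note that `any` can stop at the first witness, while `all` must survive the whole domain, and neither terminates on an infinite domain:

```python
def holds_forall(domain, pred):
    """forall x. pred(x): must check every element of the domain."""
    return all(pred(x) for x in domain)

def holds_exists(domain, pred):
    """exists x. pred(x): one witness suffices (short-circuits)."""
    return any(pred(x) for x in domain)

D = range(5)  # a finite structure with domain {0, ..., 4}

# exists x (x + x = x): witness x = 0
print(holds_exists(D, lambda x: x + x == x))  # True

# forall x exists y (y > x): fails, since x = 4 has no witness y in D
print(holds_forall(D, lambda x: holds_exists(D, lambda y: y > x)))  # False
```

Over the natural numbers the second formula would be true, but no amount of element-by-element checking could ever confirm that.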




This is a rather unpleasant situation, and exactly the point where syntactic proofs come in very handy.



Recall the definition of entailment:




$\Gamma \vDash \phi$ iff all interpretations that satisfy all formulas in $\Gamma$ also satisfy $\phi$




or equivalently




$\Gamma \vDash \phi$ iff there is no interpretation that satisfies all formulas in $\Gamma$ but not $\phi$.




This is precisely the kind of universal quantification that makes purely semantic proofs difficult: We would have to establish a proof over the infinity of all the possible structures to check whether the semantic entailment relation does or does not hold.

But now look at the definition of syntactic derivability:




$\Gamma \vdash \phi$ iff there is a derivation with premises from $\Gamma$ and conclusion $\phi$.




The nasty universal quantifier suddenly became an existential one! Rather than having to argue over all the possible structures, it now suffices to exhibit just one syntactic derivation and we're done. (The same applies to the case where we don't have any premises: $\vDash \phi$, "$\phi$ is valid" = "true in all structures", vs. $\vdash \phi$, "$\phi$ is derivable" = "there is a derivation with no open assumptions and $\phi$ as the conclusion".) This is an enormous advantage -- call it "superior" if you want.
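The existential character of derivability can be illustrated with a toy proof search in Python. This is entirely my own simplification (atomic formulas as strings, implications as pairs, modus ponens as the only rule) -- the point is just that the search only has to reach the goal once, by whatever route:

```python
def derivable(premises, implications, goal):
    """Toy forward chaining: derive everything reachable from the
    premises via modus ponens and check whether the goal shows up.
    Finding ONE derivation settles Gamma |- phi positively."""
    derived = set(premises)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in implications:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return goal in derived

# A, A -> B, B -> C |- C
print(derivable({"A"}, [("A", "B"), ("B", "C")], "C"))  # True
print(derivable({"A"}, [("B", "C")], "C"))              # False
```

A real calculus has more rules and an infinite search space, but the positive case still only needs a single successful derivation.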



Now we have a kind of disparity: For some things semantics is hard whereas syntax is easy, so how can we use this disparity for good?

Luckily, in the case of classical logic, we are equipped with soundness and completeness:




Soundness: If $\Gamma \vdash \phi$, then $\Gamma \vDash \phi$ -- if we found a syntactic derivation, then we know that the entailment holds semantically.



Completeness: If $\Gamma \vDash \phi$, then $\Gamma \vdash \phi$ -- if a semantic entailment holds, then we can find a syntactic derivation.




While any reasonable derivation system will be sound w.r.t. the language's semantics, completeness is a non-trivial and very powerful result: if we want to prove a semantic entailment, by completeness we know that there must be a syntactic derivation, so we can go and find just one such derivation; as soon as we have it, soundness assures us that this is indeed a proof that the entailment holds semantically. Hence we can use syntactic proofs to avoid cumbersome semantic arguments that involve meta-logical quantification over all structures. This is pretty neat.



Now note how things turn around for the syntactic calculus:




  • To prove that


    • a universally quantified syntactic meta-statement is true or an existentially quantified syntactic meta-statement is false (for example $\not\vdash \phi$, "The formula $\phi$ is underivable", i.e. "There is no derivation with conclusion $\phi$"/"All attempts to find a derivation with conclusion $\phi$ eventually fail"),


    we would have to argue over all possible syntactic proofs, which can again be difficult.



Now we can apply the soundness and completeness results in the other direction: if we want to show that a formula is not derivable, then by contraposition on completeness we know it is not valid (because if it were, then by completeness there would be a derivation), so we can carry out a semantic proof by providing just one countermodel to the validity of $\phi$, and we're almost done; then, again by contraposition on soundness, we can be sure that if the formula is not valid, there is no derivation (because if there were a derivation of something that is not semantically valid, our system would be unsound). So we have our proof of underivability without needing to argue over hypothetical derivations that can't exist.
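The same point in code: to refute validity (and hence, by soundness, derivability), a single countermodel is enough. A small Python sketch for the propositional case (the helper `countermodel` is my own):

```python
from itertools import product

def countermodel(formula, variables):
    """Return an assignment making `formula` false, or None if the
    formula is valid. A single hit refutes validity -- and by
    (contraposed) soundness, derivability as well."""
    for values in product([True, False], repeat=len(variables)):
        v = dict(zip(variables, values))
        if not formula(v):
            return v
    return None

# p -> q is not valid:
print(countermodel(lambda v: (not v["p"]) or v["q"], ["p", "q"]))
# -> {'p': True, 'q': False}

# p or not p is a tautology, so no countermodel exists:
print(countermodel(lambda v: v["p"] or not v["p"], ["p"]))  # None
```

Exhibiting the one falsifying assignment replaces any argument over the (nonexistent) derivations of p -> q.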



And this is precisely how the aforementioned duality comes about:



--------------------------------------------------------------------------------
                semantic                        syntactic
--------------------------------------------------------------------------------
positive        ⊨                               ⊢
                universal quantif.              existential quantif.
                ("all structures"/              ("there is a derivation"/
                "no structure such that not")   "not all derivations fail")
                => difficult 🙁                  => easy 🙂

negated         ⊭                               ⊬
                negated universal quantif.      negated existential quantif.
                ("not all structures"/          ("there is no syntactic proof"/
                "there exists a countermodel")  "all attempts at proofs fail")
                => easy 🙂                       => difficult 🙁
--------------------------------------------------------------------------------


Thanks to soundness and completeness, the duality of semantic and syntactic proofs can help us bridge the difficult parts:





  • $\vDash$ ("all structures" -- difficult) $\xrightarrow{\text{completeness}}$ $\vdash$ ("some derivation" -- easy) $\xrightarrow{\text{soundness}}$ $\vDash$


  • $\not\vdash$ ("no derivation" -- difficult) $\xrightarrow{\text{contrapos. completeness}}$ $\not\vDash$ ("some countermodel" -- easy) $\xrightarrow{\text{contrapos. soundness}}$ $\not\vdash$


Putting these bridges into the picture from before:



------------------------------------------------------------------------------
                semantic                        syntactic
------------------------------------------------------------------------------
                          completeness
                         ------------->
positive   🙁   ⊨                               ⊢   🙂
                         <-------------
                           soundness

                     contrapos. completeness
                    <-----------------------
negated    🙂   ⊭                               ⊬   🙁
                    ----------------------->
                      contrapos. soundness
------------------------------------------------------------------------------


I think the mere existence of syntactic calculi is wonderful enough simply for the mathematical beauty of this symmetry.







edited 57 mins ago

























answered 7 hours ago









– lemontree

  • "This is all I could have hoped for and more! Thank you very much for the time you took to answer this!" – Threnody, 2 hours ago

  • "If I may ask for clarification: 'Γ⊨ϕ iff all interpretations that satisfy all formulas in Γ also satisfy ϕ' -- by interpretation do you mean assignment? I am still new to the language used in these subjects." – Threnody, 2 hours ago

  • "@Threnody This was intentionally left unspecific because the underlying idea is the same for both propositional and first-order logic (but I probably should have worded it more clearly) - in the case of propositional logic, interpretations will be assignments, and in the case of first-order logic, interpretations will be structures." – lemontree, 2 hours ago


















The good news is that for first-order logic we have the soundness and completeness theorems: for any first-order theory $T$ and any sentence $\sigma$ in the same language, $T \models \sigma$ if and only if $T \vdash \sigma$. That is, semantic and syntactic truth are equivalent.



In light of that theorem, you could, if you wish, focus entirely on semantic truth (as is sometimes done by model theorists) or entirely on syntactic truth (as is sometimes done by proof theorists). Each area has its own questions and techniques, but they are closely linked, so there is also opportunity for mixing the two.



One direct consequence of the completeness theorem is the compactness theorem, which is absolutely central to first-order model theory. It says that if every finite subset of some theory $T$ has a model, then $T$ itself has a model. You can prove this by observing that if $T$ had no model then you could write a proof of a contradiction from $T$. Since every proof is finite, only finitely many statements from $T$ are needed in your proof, so there must be a finite subset of $T$ that proves a contradiction, and hence has no models. (It's possible to prove compactness by methods that feel more semantic, like ultraproducts, but the proof sketched here is the first one that many people see).






  • "What's the point of talking about the compactness theorem with respect to OP's question?" – Ruggiero Rilievi, 8 hours ago

  • "@RuggieroRilievi It gives an application of syntactic methods to semantics. Of course, you don't need to prove compactness via completeness. There's the ultraproduct-based proof. Moreover, there is also a Henkin-style proof of compactness which seems to be folklore - replace the provability relation $\vdash$ with the finitary entailment relation given by $\Gamma \models_{fin} \varphi$ iff $\Gamma' \models \varphi$ for some finite $\Gamma' \subseteq \Gamma$, and run the argument as usual. Personally, I love this Henkin argument since it can let you "backsolve" for a sound and complete proof system." – Noah Schweber, 8 hours ago

  • "@NoahSchweber Do you have a reference for that Henkin argument? I'm personally a fan of ultraproducts (mostly because of their topological interpretation), but I'd be very interested to see the proof you're describing." – Chris Eagle, 8 hours ago

  • "@ChrisEagle I've never actually seen it written up, sadly, and it's a bit long for a comment here. But it really is just a quick find-and-replace. You extend the language/theory as usual (but with $\models_{fin}$ in place of $\vdash$), and then look at the structure consisting of equivalence classes of terms modulo finitary-entailment-equivalence. Personally my favorite part about it is that it suggests the idea of looking at generalizations of term models, where we replace provability modulo a theory with some other consequence relation." – Noah Schweber, 8 hours ago

  • "If you ask a question here, though, I'd be happy to write it up in some detail - it really should appear more explicitly." – Noah Schweber, 8 hours ago
















4












$begingroup$

The good news is that for first-order logic we have the soundness and completeness theorems: For any first-order theory $T$, and any sentence $sigma$ in the same language, $T models sigma$ if and only if $T vdash sigma$. That is, semantic and syntactic truth are equivalent.



In light of that theorem, you could, if you wish, focus entirely on semantic truth (as is sometimes done by model theorists) or entirely on syntactic truth (as is sometimes done by proof theorists). Each area has its own questions and techniques, but they are closely linked, so there is also opportunity for mixing the two.



One direct consequence of the completeness theorem is the compactness theorem, which is absolutely central to first-order model theory. It says that if every finite subset of some theory $T$ has a model, then $T$ itself has a model. You can prove this by observing that if $T$ had no model then you could write a proof of a contradiction from $T$. Since every proof is finite, only finitely many statements from $T$ are needed in your proof, so there must be a finite subset of $T$ that proves a contradiction, and hence has no models. (It's possible to prove compactness by methods that feel more semantic, like ultraproducts, but the proof sketched here is the first one that many people see).






share|cite|improve this answer









$endgroup$











  • 1




    $begingroup$
    What 's the point of talking about compactness theorem with respect to OP's question?
    $endgroup$
    – Ruggiero Rilievi
    8 hours ago










  • $begingroup$
    @RuggieroRilievi It gives an application of syntactic methods to semantics. Of course, you don't need to prove compactness via completeness. There's the ultraproduct-based proof. Moreover, there is also a Henkin-style proof of compactness which seems to be folklore - replace the provability relation $vdash$ with the finitary entailment relation given by $Gammamodels_{fin}varphi$ iff $Gamma'modelsvarphi$ for some finite $Gamma'subseteqGamma$, and run the argument as usual. Personally, I love this Henkin argument since it can let you "backsolve" for a sound and complete proof system.
    $endgroup$
    – Noah Schweber
    8 hours ago










  • $begingroup$
    @NoahSchweber Do you have a reference for that Henkin argument? I'm personally a fan of ultraproducts (mostly because of their topological interpretation), but I'd be very interested to see the proof you're describing.
    $endgroup$
    – Chris Eagle
    8 hours ago










  • $begingroup$
    @ChrisEagle I've never actually seen it written up, sadly, and it's a bit long for a comment here. But it really is just a quick find-and-replace. You extend the language/theory as usual (but with $models_{fin}$ in place of $vdash$), and then look at the structure consisting of equivalence classes of terms modulo finitary-entailment-equivalence. Personally my favorite part about it is that it suggests the idea of looking at generalizations of term models, where we replace provability modulo a theory with some other consequence relation.
    $endgroup$
    – Noah Schweber
    8 hours ago










  • $begingroup$
    If you ask a question here, though, I'd be happy to write it up in some detail - it really should appear more explicitly.
    $endgroup$
    – Noah Schweber
    8 hours ago














4












4








4





$begingroup$

The good news is that for first-order logic we have the soundness and completeness theorems: For any first-order theory $T$, and any sentence $sigma$ in the same language, $T models sigma$ if and only if $T vdash sigma$. That is, semantic and syntactic truth are equivalent.



In light of that theorem, you could, if you wish, focus entirely on semantic truth (as is sometimes done by model theorists) or entirely on syntactic truth (as is sometimes done by proof theorists). Each area has its own questions and techniques, but they are closely linked, so there is also opportunity for mixing the two.



One direct consequence of the completeness theorem is the compactness theorem, which is absolutely central to first-order model theory. It says that if every finite subset of some theory $T$ has a model, then $T$ itself has a model. You can prove this by observing that if $T$ had no model then you could write a proof of a contradiction from $T$. Since every proof is finite, only finitely many statements from $T$ are needed in your proof, so there must be a finite subset of $T$ that proves a contradiction, and hence has no models. (It's possible to prove compactness by methods that feel more semantic, like ultraproducts, but the proof sketched here is the first one that many people see).






share|cite|improve this answer









$endgroup$



The good news is that for first-order logic we have the soundness and completeness theorems: For any first-order theory $T$, and any sentence $sigma$ in the same language, $T models sigma$ if and only if $T vdash sigma$. That is, semantic and syntactic truth are equivalent.



In light of that theorem, you could, if you wish, focus entirely on semantic truth (as is sometimes done by model theorists) or entirely on syntactic truth (as is sometimes done by proof theorists). Each area has its own questions and techniques, but they are closely linked, so there is also opportunity for mixing the two.



One direct consequence of the completeness theorem is the compactness theorem, which is absolutely central to first-order model theory. It says that if every finite subset of some theory $T$ has a model, then $T$ itself has a model. You can prove this by observing that if $T$ had no model then you could write a proof of a contradiction from $T$. Since every proof is finite, only finitely many statements from $T$ are needed in your proof, so there must be a finite subset of $T$ that proves a contradiction, and hence has no models. (It's possible to prove compactness by methods that feel more semantic, like ultraproducts, but the proof sketched here is the first one that many people see).







share|cite|improve this answer












share|cite|improve this answer



share|cite|improve this answer










answered 8 hours ago









Chris EagleChris Eagle

4388 bronze badges




4388 bronze badges











  • 1




    $begingroup$
    What 's the point of talking about compactness theorem with respect to OP's question?
    $endgroup$
    – Ruggiero Rilievi
    8 hours ago










  • $begingroup$
    @RuggieroRilievi It gives an application of syntactic methods to semantics. Of course, you don't need to prove compactness via completeness. There's the ultraproduct-based proof. Moreover, there is also a Henkin-style proof of compactness which seems to be folklore - replace the provability relation $vdash$ with the finitary entailment relation given by $Gammamodels_{fin}varphi$ iff $Gamma'modelsvarphi$ for some finite $Gamma'subseteqGamma$, and run the argument as usual. Personally, I love this Henkin argument since it can let you "backsolve" for a sound and complete proof system.
    $endgroup$
    – Noah Schweber
  • 1

    What's the point of talking about the compactness theorem with respect to the OP's question?
    – Ruggiero Rilievi
    8 hours ago

  • @RuggieroRilievi It gives an application of syntactic methods to semantics. Of course, you don't need to prove compactness via completeness: there's the ultraproduct-based proof. Moreover, there is also a Henkin-style proof of compactness which seems to be folklore - replace the provability relation $\vdash$ with the finitary entailment relation given by $\Gamma\models_{fin}\varphi$ iff $\Gamma'\models\varphi$ for some finite $\Gamma'\subseteq\Gamma$, and run the argument as usual. Personally, I love this Henkin argument since it can let you "backsolve" for a sound and complete proof system.
    – Noah Schweber
    8 hours ago

  • @NoahSchweber Do you have a reference for that Henkin argument? I'm personally a fan of ultraproducts (mostly because of their topological interpretation), but I'd be very interested to see the proof you're describing.
    – Chris Eagle
    8 hours ago

  • @ChrisEagle I've never actually seen it written up, sadly, and it's a bit long for a comment here. But it really is just a quick find-and-replace: you extend the language/theory as usual (but with $\models_{fin}$ in place of $\vdash$), and then look at the structure consisting of equivalence classes of terms modulo finitary-entailment-equivalence. My favorite part is that it suggests the idea of looking at generalizations of term models, where we replace provability modulo a theory with some other consequence relation.
    – Noah Schweber
    8 hours ago

  • If you ask a question here, though, I'd be happy to write it up in some detail - it really should appear more explicitly.
    – Noah Schweber
    8 hours ago
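A minimal sketch of the finitary entailment relation described above, written out in LaTeX; this summary and its framing of compactness are my own paraphrase of the comments, not part of the original thread:

```latex
% Finitary entailment: a purely semantic stand-in for provability.
\[
  \Gamma \models_{\mathrm{fin}} \varphi
  \quad\Longleftrightarrow\quad
  \Gamma' \models \varphi \ \text{ for some finite } \Gamma' \subseteq \Gamma .
\]
% Compactness is then the statement that \models and \models_{fin} coincide:
\[
  \Gamma \models \varphi
  \quad\Longrightarrow\quad
  \Gamma \models_{\mathrm{fin}} \varphi .
\]
% The suggested proof runs the usual Henkin construction with
% \models_{fin} in place of \vdash: extend to a maximal finitely
% satisfiable theory with witnesses, then build the term model from
% equivalence classes of terms modulo finitary-entailment-equivalence.
% No proof system appears anywhere in the argument.
```

The point of the second display is that soundness of $\models_{\mathrm{fin}}$ is trivial (a finite subset entails no more than the whole set), so the entire content of compactness sits in the completeness-shaped direction.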




Threnody is a new contributor. Be nice, and check out our Code of Conduct.











































