# Tactics

This page provides an index of the tactics we use in CS 22. It will be updated over the course of the semester as new tactics are introduced.
## Introduction & Elimination Rules

### apply

We generally recommend using `have`, not `apply`, for implication elimination.

`apply h` makes use of some hypothesis `h` (this can be any named hypothesis in your context) and tries to match the current goal against the conclusion (consequent) of `h`. If it matches, then it updates the goal to be the antecedent of `h`. Note that `apply` always requires an argument: writing just `apply` in your proof will produce a syntax error.
For example, if `h : p → q` and the goal is `q`, then `apply h` changes the goal to `p`. In other words, `apply` uses `h` to "reason" that in order to show `q`, it suffices to show `p`.

Note that the proposition `q` could be complex: as long as it matches the consequent of `h`, `apply h` will succeed. Be on the lookout for an if-then hypothesis whose consequent matches the conclusion; if its antecedent looks easier to prove than the current goal, then `apply` is likely a good tactic to try.
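
Here is a minimal sketch of `apply` in action (assuming the course library, which provides these tactics, is imported):

```lean
example (p q : Prop) (h : p → q) (hp : p) : q := by
  apply h      -- the goal `q` matches the consequent of `h`; it becomes `p`
  assumption   -- `hp : p` matches the new goal
```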
### assume

The `assume` tactic applies the introduction rule for implication: given a goal of the form `P → Q` (where `P` and `Q` could be any propositions), writing `assume hp` will add the hypothesis `hp : P` to your context and update your goal to be `Q`. (Recall that this is how we prove implications: we assume the antecedent, then, from that assumption, show the consequent.) On paper, the step taken by `assume hp` above would probably appear as something like "assume that P is true" or simply "suppose P." Note that names of hypotheses should begin with `h` (unless they are inductive hypotheses in an induction proof, in which case they may begin with `ih`) per our naming conventions.
We also use `assume` to prove negations via proof by contradiction.¹ Given a goal of the form `¬P`, executing `assume hp` will add the hypothesis `hp : P` to your context and update your goal to be `False`. In other words, we want to show that supposing `P` to be true entails an absurd state of affairs (where "an absurd state of affairs" is formalized as "being able to prove `False`, something we should not be able to prove"), so that `P` must be false. This mirrors how we prove negations on paper: to show that "P is false," we'll usually write something like, "Assume for the sake of contradiction that P is true. Then [more steps here]. But now we've reached a contradiction, so P must be false." (Notice that we don't explicitly say that we've "proved `False`"; instead, this is implicit in our arrival at a contradiction. Similarly, in order to prove `False` in Lean, you'll want to obtain a context that's contradictory, then use the `contradiction` tactic.)
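
Two small sketches (assuming the course library), one for each use of `assume`:

```lean
example (p q : Prop) (hq : q) : p → q := by
  assume hp      -- adds `hp : p` to the context; goal becomes `q`
  assumption

example (p : Prop) (hp : p) : ¬¬p := by
  assume hnp     -- the goal `¬¬p` is a negation: we get `hnp : ¬p`, goal `False`
  contradiction  -- `hp` and `hnp` are contradictory
```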
### assumption

If a hypothesis in your context matches the goal exactly, `assumption` will solve the goal. This tactic can only be used to close a goal, i.e., as the last step in a proof of a given goal. It roughly corresponds to a phrase like "as we've already assumed/shown" in a written proof.
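
For instance:

```lean
example (p q : Prop) (hp : p) (hq : q) : q := by
  assumption   -- `hq : q` matches the goal exactly
```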
### contradiction

The `contradiction` tactic applies the principle of explosion (sometimes also known by the Latin *ex falso quodlibet*, roughly "from falsehood, anything [follows]"): if your context contains "obviously" impossible or contradictory claims, then `contradiction` can prove any goal (even one that's false). (This step is sometimes implicit in a paper proof, as when one considers some situation, shows that it entails a contradiction, and then deduces that the situation "is impossible," implicitly discharging one's proof obligation without explicitly invoking "explosion.") Note that this implicitly uses the elimination rule for falsehood: from contradictory claims, we can derive `False`; the elimination rule for `False` then allows us to prove anything. (`contradiction` combines all of this into a single step.)
Examples of contradictory contexts include any context in which you have hypotheses `P` and `¬P` for some proposition `P`, as well as hypotheses like `0 = 1` and other definitional impossibilities. Note that `contradiction` cannot do sophisticated reasoning to deduce a contradiction from existing hypotheses: for instance, if you have hypotheses `a > b` and `a ≤ b`, `contradiction` will not close the goal (since `a ≤ b` is not syntactically the negation of `a > b`, even though they are logically equivalent).
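
A minimal sketch:

```lean
example (p q : Prop) (hp : p) (hnp : ¬p) : q := by
  contradiction   -- `hp` and `hnp` can't both hold, so even `q` follows
```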
### eliminate

`eliminate` applies elimination rules to existing hypotheses in your context. (Recall that elimination rules tell us how we can "use" or "destruct" an existing piece of knowledge of a given form.) The syntax of `eliminate` is `eliminate h with ...`, where `h` is the hypothesis you want to "destruct" and the `...` is filled in with names to be assigned to any variables or hypotheses generated by the elimination rule (the number and kind of names you supply here is determined by which elimination rule is being applied). Here is a list of common elimination rules that `eliminate` can apply:
- Conjunction elimination: given a hypothesis `h : p ∧ q`, `eliminate h with hp hq` will "split apart" `h` into hypotheses `hp : p` and `hq : q` in your context.
- Disjunction elimination ("proof by cases"): given a hypothesis `h : p ∨ q`, `eliminate h with hp hq` generates two goals, one with a hypothesis `hp : p` and one with a hypothesis `hq : q`. You'll then need to separately show that your original goal holds in each case.
- Existential-quantification elimination ("extracting a witness"): if `T` is a type (like `ℕ` or `ℤ`) and `P` is a predicate, then given a hypothesis `hex : ∃ x : T, P x`, `eliminate hex with w hw` will replace `hex` in your context with a value `w : T` and a hypothesis `hw : P w`. Intuitively, `hex` says that there's some value out there with property `P`, but you don't know what it is; the elimination rule applied by `eliminate` allows you to "get your hands on" a specific such value `w` (where `hw` reflects the fact that `P` does in fact hold of `w`).
Note that `eliminate` is not used for implication elimination or universal-quantification elimination: for those rules, see the `have` tactic.
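
One sketch per elimination rule above (again assuming the course library's syntax):

```lean
-- Conjunction elimination
example (p q : Prop) (h : p ∧ q) : q := by
  eliminate h with hp hq    -- context gains `hp : p` and `hq : q`
  assumption

-- Disjunction elimination: one goal per case
example (p q : Prop) (h : p ∨ q) : q ∨ p := by
  eliminate h with hp hq
  { right                   -- case 1: we have `hp : p`
    assumption }
  { left                    -- case 2: we have `hq : q`
    assumption }

-- Existential elimination: extract a witness
example (P : ℕ → Prop) (hex : ∃ x : ℕ, P x) : ∃ y : ℕ, P y := by
  eliminate hex with w hw   -- `w : ℕ` and `hw : P w` enter the context
  existsi w
  assumption
```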
### existsi

`existsi` applies the introduction rule for the existential quantifier. That is, it allows you to specify a witness for which to prove an existentially-quantified goal: if `T` is some type (like `ℤ` or `ℕ`), `P` is a predicate, your goal is of the form `∃ x : T, P x`, and you have some value `n : T`, then `existsi n` will update your goal to be `P n`. (Note that `n` need not be a variable; it could also be some complex expression, like `2 * c + 4`.) In a paper proof, this corresponds to reasoning of the form "we need to show that there exists some number with this property, so we'll show that `n` is such a number." Intuitively, `existsi` is used when your goal asks you to prove that there exists some value with a given property, and it works by specifying which particular value you're going to show has that property.
Note that `existsi` always requires an argument: you should always write `existsi arg` for some expression `arg`. If you write just `existsi` as a standalone line in your proof, you will get a syntax error.
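
A minimal sketch:

```lean
example : ∃ n : ℤ, n + 1 = 3 := by
  existsi 2    -- goal becomes `2 + 1 = 3`
  numbers      -- concrete arithmetic
```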
### fix

The `fix` tactic applies the introduction rule for the universal quantifier: given a goal of the form `∀ x : T, P x`, executing `fix x` will add a variable `x : T` to your context and update the goal to be `P x`. Note that you can pick any name you'd like for the variable you introduce (subject to our naming conventions); it need not match the name used by the universal quantifier. (For instance, in the above example, I could execute `fix n` to add a variable `n : T` to my context and update my goal to `P n`.)
The `fix` tactic is analogous to the following kind of reasoning in a paper proof: "We need to prove that property P holds for any number. To do so, we let n be an arbitrary number and show that P holds of n." In an actual written proof, this might simply appear as "let n be an arbitrary natural number" or "fix an integer k."
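
A minimal sketch:

```lean
example : ∀ n : ℕ, n + 1 ≥ n := by
  fix n        -- `n : ℕ` enters the context; goal becomes `n + 1 ≥ n`
  linarith
```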
### have

The `have` tactic allows you to add new hypotheses to your context. There are two main usages we'll see in this course (both are sketched in the examples after this list):

- `have` can be used to apply the elimination rules for universal quantification and implication, adding the resulting hypothesis to the context.

  The syntax for implication elimination (also known as modus ponens) is as follows: if your context contains hypotheses `hpq : P → Q` and `hp : P`, then `have hq : Q := hpq hp` will add the hypothesis `hq : Q` to your context. (On paper, this is analogous to writing "we know that P implies Q, and we also know P, so we can conclude Q by modus ponens.")

  The syntax for universal-quantification elimination (which corresponds to "instantiating" a universal claim at some particular value) is similar: if `T` is a type (like `ℕ` or `ℤ`) and you have in your context a hypothesis `hp : ∀ n : T, P n` and a value `k : T`, then `have hk : P k := hp k` adds the hypothesis `hk : P k` to your context. (This corresponds to an argument like, "we know that property P holds of any (rational/natural/integer) number, so since k is a (rational/natural/integer) number, we know P holds of k.")
- `have` can also be used to create a new goal that, once discharged, will be added as a hypothesis to your context. This is a useful way of proving an ancillary result that you want to make use of later in your primary proof. The syntax for this is as follows: write `have hp : P` (note the lack of a `:=` sign); this will create a new topmost goal `P`. Then use the `{ }` syntax to focus that goal and prove it. Once you've finished (i.e., after your closing `}`), you'll have a new hypothesis `hp : P` in your context.
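
Minimal sketches of both usages (assuming the course library):

```lean
-- Usage 1: implication and universal-quantification elimination
example (P Q : Prop) (hpq : P → Q) (hp : P) : Q := by
  have hq : Q := hpq hp   -- modus ponens adds `hq : Q`
  assumption

example (P : ℤ → Prop) (hp : ∀ n : ℤ, P n) : P 5 := by
  have h5 : P 5 := hp 5   -- instantiate the universal claim at `5`
  assumption

-- Usage 2: proving an ancillary result in a focused block
example (x y : ℤ) (h1 : x < 10) (h2 : y < 10) : x + y < 30 := by
  have hsum : x + y < 20   -- no `:=`, so this becomes a new goal
  { linarith }
  linarith                 -- `hsum` is now available as a hypothesis
```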
### left and right

`left` and `right` apply the introduction rules for disjunction ("or"). Intuitively, they are used to "target" one side of a disjunctive (`∨`) proposition in the goal. When presented with a disjunction in your goal, your first instinct should generally be to use `left` or `right` to isolate one of the two propositions, then prove that selected goal.

For example, given a goal of `p ∨ q`, `left` would make the goal `p`, while `right` would make the goal `q`. This corresponds in a paper proof to reasoning along the lines of, "we need to show that either P holds or Q holds; we'll show that P [or, respectively, Q] holds." When deciding which of `left` or `right` to use, look at the two sides of your goal and what hypotheses you have in your context: if one side looks easier to prove than the other, try to prove that side!
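
For instance:

```lean
example (p q : Prop) (hq : q) : p ∨ q := by
  right        -- target the right disjunct; goal becomes `q`
  assumption
```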
### reflexivity

The `reflexivity` tactic serves as a sort of "introduction rule" for reflexive relations. (A reflexive relation is one that relates any value in its domain to itself.)

The most important such relation is equality: `reflexivity` proves that anything is equal to itself. More formally, if you have a goal of the form `a = a`, then `reflexivity` can prove that goal using the reflexivity of equality. Note that `a` could be a complex expression involving numbers, variables, and functions: as long as it's the same on both sides of the equals sign, `reflexivity` will work!
As noted, `reflexivity` isn't restricted to just equality. If you have a goal of the form `a ★ a`, where `a` is any expression and `★` is some reflexive relation, then `reflexivity` will close the goal. Examples of such relations include `≤` and `≥`: for example, `n ≥ n` and `1 + x + gcd x y ≤ 1 + x + gcd x y` are both provable by `reflexivity`.
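
Two minimal sketches:

```lean
example (x : ℤ) : x + 1 = x + 1 := by
  reflexivity   -- both sides are literally the same expression

example (n : ℕ) : n ≥ n := by
  reflexivity   -- `≥` is a reflexive relation
```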
### split_goal

The `split_goal` tactic applies the introduction rule for conjunction: if your goal is of the form `p ∧ q`, then `split_goal` will turn it into two separate goals `p` and `q`. `split_goal` also applies the introduction rule for bi-implication, turning a goal of `p ↔ q` into two goals `p → q` and `q → p`. (You can think of bi-implication as behaving like the conjunction of these two implications; thus, we use the same tactic for its introduction rule.) `split_goal` does not apply introduction rules for other connectives (such as disjunction or implication) because those connectives' introduction rules don't correspond to "splitting the goal"!
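
A minimal sketch:

```lean
example (p q : Prop) (hp : p) (hq : q) : p ∧ q := by
  split_goal       -- two goals: `p` and `q`
  { assumption }
  { assumption }
```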
## Working with Numbers

### linarith

The `linarith` tactic (short for "linear arithmetic"; don't worry about what this means) can prove goals that involve equalities or inequalities of numerical expressions (which can include variables!). It's a bit like `positivity` and `numbers`, but unlike the former, it can show inequalities that don't involve `0`, and unlike the latter, it can handle expressions involving variables. It can also derive a contradiction from hypotheses that involve impossible equalities or inequalities (for instance, if you have hypotheses `x > 0` and `x ≤ 0` in your context).
`linarith` is a powerful tactic: if your goal is an equality or inequality without any connectives or quantifiers (like `x + y = 3` or `2 * x + y ≤ 4 - z`), it's a good tactic to try. Note that `linarith` can use any hypotheses in your context as additional information to help it solve the goal, and sometimes adding additional facts to your context can help `linarith` solve a goal that it's having trouble solving. One key caveat: you cannot use `linarith` to reason about expressions in which variables are multiplied together: for instance, you won't be able to show something like `x * y + 1 > 0` by `linarith`.
As an example of its use, if `x` and `y` are integer-valued variables, and you know `h1 : x < 10` and `h2 : x + y < 20`, then `linarith` can prove the goal `3*x + 2*y < 50`.
### linear_combination

The `linear_combination` tactic is produced by `polyrith`. You should never need to produce a `linear_combination` command yourself; this documentation is purely for those who are curious.
`linear_combination` "adds together" equalities in your context in order to show an equality in your goal. As an example, suppose you have `x y : ℤ` and hypotheses `h1 : x + y = 1` and `h2 : x - y = 3`. Then, to show the goal `2 * x = 4`, use `linear_combination h1 + h2`: on the left-hand side we have `(x + y) + (x - y)`, which is just `2 * x`, and on the right-hand side we get `1 + 3`, which is `4`. To show `x^2 + y^2 = x - 3*y`, use `linear_combination x * h1 - y * h2` (we leave it to you to work out the algebra!).
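
Those two examples, written out:

```lean
example (x y : ℤ) (h1 : x + y = 1) (h2 : x - y = 3) : 2 * x = 4 := by
  linear_combination h1 + h2          -- (x + y) + (x - y) = 1 + 3

example (x y : ℤ) (h1 : x + y = 1) (h2 : x - y = 3) : x^2 + y^2 = x - 3*y := by
  linear_combination x * h1 - y * h2  -- the algebra left to you above
```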
### numbers

The `numbers` tactic can prove arithmetic statements about numerals (i.e., specific, concrete numbers like `22` or `15151`). Think of `numbers` like a calculator: if your goal is a statement that you could demonstrate by doing computations with a handheld calculator, then `numbers` can probably solve it. For instance, goals like `1900 + 51 = 1951` or `23 - 1 < 21 * (995 + 5) + 228` are solvable by `numbers`. Since `numbers` is designed to directly discharge a goal of this form, it should be the last tactic you use when proving a given goal (similar to `linarith` and `assumption`).
Note that `numbers` can only prove things involving concrete numerals: it cannot prove goals that involve number-valued variables. For instance, goals like `3 + n = 1 + 2 + n` or `100 * (n + 1) < 200 * (n + 1)` (where you have in your context `n : ℕ`) are not provable by `numbers` because they involve a variable `n`; for these types of goals, `linarith` is the tactic to use.
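
For instance:

```lean
example : 23 - 1 < 21 * (995 + 5) + 228 := by
  numbers   -- pure numeral arithmetic, no variables
```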
### polyrith

Like `linarith`, the `polyrith` tactic can prove equalities of numerical expressions (including those that contain variables). It is different from `linarith` in two key respects: it can prove equalities that involve variables being multiplied together (like `x*y = -2*y + 1` or `x*z + 2*y*z = x*w + 2*y*w`), and it cannot prove any inequalities. Like `linarith`, `polyrith` can use any hypotheses in your context as additional information to help it solve the goal, and sometimes adding additional facts to your context can help `polyrith` solve a goal that it's having trouble solving.
A unique feature of `polyrith` is that it should never appear in your final proof. Instead, when you run `polyrith`, it will produce output in your Infoview giving you a `linear_combination` command with which you should replace the call to `polyrith`. The purpose of executing `polyrith` is simply to generate the `linear_combination` command; that command, not `polyrith`, is what should appear in your final proof.
As an example of its use, if `x` and `y` are integer-valued variables, and you know `h1 : x*y + 2*x = 1` and `h2 : 3*x = y`, then `polyrith` can prove the goal `y*y + 6*x = 3`; specifically, it will tell you to use `linear_combination 3 * h1 - y * h2`.
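
After replacing the `polyrith` call with its suggestion, that example ends up looking like this:

```lean
example (x y : ℤ) (h1 : x*y + 2*x = 1) (h2 : 3*x = y) : y*y + 6*x = 3 := by
  linear_combination 3 * h1 - y * h2   -- the command suggested by `polyrith`
```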
### positivity

If your goal is to show that something is positive or nonnegative (like `0 < x`, `x ≥ 0`, `0 ≤ a²`, etc.) and this "obviously" follows from your hypotheses, write `positivity` to finish that goal. This tactic knows basic facts like "natural numbers are nonnegative" and "the square of a positive number is positive." It does not know complicated arithmetic.
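
For instance:

```lean
example (a : ℤ) : 0 ≤ a^2 := by
  positivity   -- squares are nonnegative
```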
## Rewriting and Simplifying

### dsimp

The `dsimp` tactic is used for definitional simplification: that is, it "unfolds" the meaning of a definition. You might do this in a paper proof in a setting like the following: "We want to show that a divides b, so by the definition of divisibility, we must find some integer c such that b = ac." This sentence hasn't made any progress toward actually showing that such a c exists; rather, it spells out for the reader what it means to show a divisibility claim. Similarly in Lean, perhaps your goal is of the form `MyPred x`, where `MyPred` is some predicate defined elsewhere; you might not remember what exactly `MyPred` means, and so you can use `dsimp [MyPred]` to unfold the definition of that predicate and see what is actually required in order to show that `MyPred` holds of `x`.
The syntax of `dsimp` is as follows: `dsimp [name]` will unfold the definition of `name` in your goal, while `dsimp [name] at h` will unfold the definition of `name` in the hypothesis `h`. (For instance, if I had a hypothesis `h : MyPred x` and executed `dsimp [MyPred] at h`, the hypothesis `h` would now show the underlying definition of `MyPred x`.) You can also pass multiple names in the bracketed comma-separated list to unfold all of their definitions at once: for instance, `dsimp [name1, name2, name3]` or `dsimp [name1, name2, name3] at h`. Note that `dsimp` should always take an argument in the form of a bracketed name or list of names to unfold; if you do not include such an argument (e.g., if you just write `dsimp` as a line of your proof), you will get an error.
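
A sketch, using a hypothetical predicate `MyPred` defined purely for illustration:

```lean
def MyPred (x : ℤ) : Prop := x > 0   -- hypothetical definition

example : MyPred 1 := by
  dsimp [MyPred]   -- goal `MyPred 1` unfolds to `1 > 0`
  numbers
```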
### rewrite

The `rewrite` tactic performs "substitution of equals" in expressions. This corresponds to reasoning of the form, "We want to prove something about `a`, but since we know that `a = b`, we can prove it about `b` instead."
The syntax is as follows: letting `a` and `b` be arbitrary expressions (potentially complex ones; they need not just be variables), if you have a hypothesis `hab : a = b` in your context, and you have a goal in which `a` occurs, then `rewrite hab` will change all occurrences of `a` in your goal to `b`. For instance, if your goal is `a = c`, then `rewrite hab` will change your goal to `b = c`; if your goal is `a + a = c`, then `rewrite hab` will change it to `b + b = c`.
Note that `rewrite` will always substitute the expression on the right side of the supplied hypothesis in place of the expression on the left side. (If we had a hypothesis `hba : b = a` and tried to do `rewrite hba` in the above examples, it would fail, because it would be trying to replace `b` with `a` instead of `a` with `b`.) To tell `rewrite` to rewrite in the "other direction" of the equality (i.e., substituting what appears on the left in place of what appears on the right), you must prefix the hypothesis in your rewrite with a left-arrow symbol `←`, as follows: `rewrite ←hba`. The `←` symbol can be entered by typing a backslash (`\`) followed by a lowercase `l` on your keyboard.
You can also use `rewrite` to rewrite at another hypothesis rather than rewriting your goal. To do so, append `at htarget` to the end of your rewrite command, where `htarget` is the hypothesis you wish to rewrite. For instance, if you have a hypothesis `hab : a = b` and another hypothesis `hP : P a` (where `P` is some predicate), and you'd instead like to have `hP : P b`, this could be accomplished by executing `rewrite hab at hP`. Notice the order here: the hypothesis named before `at` is the equality used to perform (or "justify") the rewrite, while the (single) hypothesis after `at` is the hypothesis being rewritten.
Finally, `rewrite` allows you to rewrite with multiple hypotheses in sequence. This is useful if you need to perform multiple rewrites in one step of your proof, as in a paper proof when you write something like, "We want to prove something about `a` and `c`, but since `a = b` and `c = d`, we can instead prove it about `b` and `d`." To do this, enter the hypotheses with which you wish to rewrite as a comma-separated list between brackets: e.g., `rewrite [hab, hcd]`. Any hypothesis in the list can be prefixed with a left arrow to indicate that it should be used in the backward direction, as discussed in the paragraph above.
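
Sketches of the forward and backward directions:

```lean
example (a b c : ℤ) (hab : a = b) (h : b + b = c) : a + a = c := by
  rewrite hab    -- replaces `a` with `b`; goal becomes `b + b = c`
  assumption

example (a b c : ℤ) (hba : b = a) (h : b + b = c) : a + a = c := by
  rewrite ←hba   -- backward direction: replaces `a` with `b`
  assumption
```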
## Set Theory

### extensionality

The `extensionality` tactic uses the principle of extensionality to rewrite a goal involving an equality of sets. This principle says that in order to show that `S = T`, where `S` and `T` are sets over a universe type `𝒰` (in CS 22, `𝒰` is typically `ℤ` or `ℕ`), it suffices to show that `∀ x : 𝒰, x ∈ S ↔ x ∈ T`. Accordingly, applying `extensionality` to the goal `S = T` will change the goal to `∀ x : 𝒰, x ∈ S ↔ x ∈ T`.
### set_simplify

The `set_simplify` tactic changes any hypotheses or goals expressed in terms of membership in sets formed by set operations to expressions involving logical connectives instead. For example, `set_simplify` will change a goal or hypothesis of `x ∈ Aᶜ` to `¬ (x ∈ A)`, and it will change a goal or hypothesis of `x ∈ A ∩ B` to `x ∈ A ∧ x ∈ B`. In a paper proof, the manipulations performed by `set_simplify` correspond to "unpacking" the definitions of different set operations (e.g., to have `x ∈ A ∩ B` is definitionally to have `x ∈ A` and `x ∈ B`).
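
A sketch combining `extensionality` and `set_simplify` (assuming the course library; the exact intermediate goals may be displayed slightly differently):

```lean
example (A B : Set ℤ) : A ∩ B = B ∩ A := by
  extensionality   -- goal: ∀ x : ℤ, x ∈ A ∩ B ↔ x ∈ B ∩ A
  fix x
  set_simplify     -- goal: x ∈ A ∧ x ∈ B ↔ x ∈ B ∧ x ∈ A
  split_goal
  { assume h
    eliminate h with h1 h2
    split_goal
    { assumption }
    { assumption } }
  { assume h
    eliminate h with h1 h2
    split_goal
    { assumption }
    { assumption } }
```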
## Induction

### basic_induction

The `basic_induction` tactic performs standard ("weak") natural-number induction. If your current goal is of the form `∀ n : ℕ, P n`, where `P` is some predicate on natural numbers, then `basic_induction` will generate two goals:

- The base case, which consists of showing `P 0`, and
- The inductive step, which requires you to show `∀ (n : ℕ), P n → P (n + 1)`.
You should use the `{ }` notation to focus each step of your induction in turn, just as you would clearly demarcate your base case and inductive step in a paper proof.

Note that you must use `basic_induction` on a universally-quantified goal: do not introduce the quantified variable prior to invoking `basic_induction`.
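
A minimal sketch (assuming the course library):

```lean
example : ∀ n : ℕ, 2 * n ≥ n := by
  basic_induction
  { -- base case: 2 * 0 ≥ 0
    numbers }
  { -- inductive step: ∀ (n : ℕ), 2 * n ≥ n → 2 * (n + 1) ≥ n + 1
    fix n
    assume ihn
    linarith }
```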
### induction_from_starting_point

The `induction_from_starting_point` tactic is similar to the `basic_induction` tactic, except that it works specifically on goals of the form `∀ n ≥ c, P n`, where `c` is some natural-number expression (like `37` or `3 * k` or the like). Lean may sometimes render these goals equivalently as `∀ n : ℕ, n ≥ c → P n` (the former notation is merely shorthand for the latter).
When you invoke `induction_from_starting_point` on such a goal, it will generate two goals: a base case (at value `c` rather than `0`!) and an inductive step. Note that, unlike with `basic_induction`, your goal for the inductive step will have an extra hypothesis `n ≥ c`, giving a goal of the form `∀ (n : ℕ), n ≥ c → P n → P (n + 1)`. (It would be pointless to include such an extra hypothesis when doing `basic_induction`, since the analogous hypothesis there would be `n ≥ 0`, which is trivially true for all natural numbers `n`.)
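
A minimal sketch (assuming the course library):

```lean
example : ∀ n ≥ 3, 2 * n ≥ n + 3 := by
  induction_from_starting_point
  { -- base case, at n = 3: 2 * 3 ≥ 3 + 3
    numbers }
  { -- inductive step: ∀ (n : ℕ), n ≥ 3 → 2 * n ≥ n + 3 → 2 * (n + 1) ≥ (n + 1) + 3
    fix n
    assume hn
    assume ihn
    linarith }
```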
### strong_induction

The `strong_induction` tactic applies the principle of strong induction to a goal universally quantified over a natural number. It turns a goal of the form `∀ n : ℕ, P n` into a goal `∀ (n : ℕ), (∀ m < n, P m) → P n`. The antecedent of the implication is the strong inductive hypothesis. Notice that Lean's strong induction does not explicitly create any base cases: if you want to perform case analysis on the value of `n`, you'll need to do so manually.
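
A shape-only sketch (assuming the course library): real uses of strong induction would actually consult the inductive hypothesis `ih`, which this toy example doesn't need.

```lean
example : ∀ n : ℕ, 0 ≤ n := by
  strong_induction
  fix n
  assume ih    -- ih : ∀ m < n, 0 ≤ m (unused here)
  positivity
```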
## Footnotes

1. Note that we are using this term slightly differently from what you may see in a logic class.