Proving that solutions to incremental satisfiability problems are correct


Automated reasoning can be used to mathematically prove whether software or hardware will do what it’s supposed to. In practice, automated reasoning often relies on programs known as SAT solvers, which determine whether formal expressions describing the constraints on a system can be satisfied.

SAT is notoriously difficult (it is the original NP-complete problem), and SAT solvers use all kinds of clever tricks to make it tractable: popular SAT solvers have tens of thousands of lines of code. But how do we know the SAT solver’s decisions — about the satisfiability of a given expression — are reliable? The programs are large enough that using formal analysis to verify them would be an enormous effort.

An example of an unsatisfiable SAT problem: the first two clauses ((x ∨ y) and (x ∨ ¬y)) are satisfiable only if x is true, whereas the final clause ((¬x)) requires x to be false.
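To make this concrete, the following self-contained Python sketch (illustrative only, not from the paper) checks every truth assignment of x and y by brute force and confirms that no assignment satisfies all three clauses:

```python
from itertools import product

# The three clauses from the example: (x OR y), (x OR NOT y), (NOT x).
# Each literal is a (variable, required_value) pair.
clauses = [
    [("x", True), ("y", True)],
    [("x", True), ("y", False)],
    [("x", False)],
]

def satisfies(assignment, clause):
    # A clause is satisfied if at least one of its literals matches the assignment.
    return any(assignment[var] == value for var, value in clause)

satisfiable = any(
    all(satisfies({"x": x, "y": y}, clause) for clause in clauses)
    for x, y in product([True, False], repeat=2)
)

print("satisfiable" if satisfiable else "unsatisfiable")  # prints: unsatisfiable
```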

One solution is for the SAT solver to generate a record — a trace — of its reasoning, which can be verified by an automatic proof checker. A proof checker is a comparatively simple program, which is much easier to verify than a SAT solver. And for SAT problems whose constraints can all be specified at once — even very, very complex SAT problems — there are methods for reliably generating machine-checkable proofs.
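As a flavor of what such a checker does, the sketch below implements the basic check used by DRUP/DRAT-style proof checkers — reverse unit propagation — in a few lines of Python. It is a simplified illustration, not the checker used with the work described here; clauses are represented as lists of signed integers, as in the DIMACS convention.

```python
def propagates_to_conflict(clauses, assumed):
    """Run unit propagation from the assumed literals; return True on conflict.
    Literals are DIMACS-style signed integers (e.g. -3 means NOT x3)."""
    trail = set(assumed)          # literals currently assigned true
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(lit in trail for lit in clause):
                continue          # clause already satisfied
            open_lits = [lit for lit in clause if -lit not in trail]
            if not open_lits:
                return True       # every literal falsified: conflict found
            if len(open_lits) == 1:
                trail.add(open_lits[0])   # unit clause forces this literal
                changed = True
    return False

def check_added_clause(clauses, new_clause):
    """Reverse unit propagation: the addition is valid if assuming the
    negation of the new clause leads to a conflict."""
    return propagates_to_conflict(clauses, [-lit for lit in new_clause])

# Example: from (x1 OR x2) and (NOT x1 OR x2), the clause (x2) may be added.
formula = [[1, 2], [-1, 2]]
print(check_added_clause(formula, [2]))  # True
```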

Unfortunately, in most practical situations, a SAT problem’s constraints can’t all be specified at once. Often, when we’re verifying code or hardware or network performance, we want to start by checking one constraint and, based on whether it applies or not, check a second constraint, and so on, building up our set of constraints one by one. Existing methods for generating checkable proofs don’t work with such incremental SAT problems.
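To illustrate the incremental workflow itself, here is a short sketch using the third-party python-sat package (an assumption made for illustration; it is not the solver discussed here), in which clauses are added one at a time and the solver is re-queried after each addition:

```python
# Requires the third-party python-sat package: pip install python-sat
from pysat.solvers import Minisat22

with Minisat22() as solver:
    solver.add_clause([1, 2])      # first constraint: (x1 OR x2)
    print(solver.solve())          # True: satisfiable so far

    solver.add_clause([-1, -2])    # next constraint: (NOT x1 OR NOT x2)
    print(solver.solve())          # True: still satisfiable (e.g. x1=True, x2=False)

    solver.add_clause([1])         # further constraints: (x1) and (x2)
    solver.add_clause([2])
    print(solver.solve())          # False: the accumulated constraints are unsatisfiable
```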


At this year’s conference on Formal Methods in Computer-Aided Design (FMCAD), we presented a method for generating checkable proofs for incremental SAT problems. A SAT problem consists of a long list of constraints, and the expression of each constraint is called a clause. To make SAT problems tractable, SAT solvers delete clauses that can be satisfied by the same truth assignments that satisfy some other clause.

With incremental SAT, a deleted clause sometimes needs to be restored, to ensure consistency as new constraints are added. In such cases, our approach to proof generation treats the restored clause as though it had never been deleted in the first place. This simple trick enables existing proof generation frameworks to generalize to incremental SAT. We explain in more detail below.

Incremental SAT

A SAT problem is a sequence of constraints expressed using variable names and the Boolean operators ∧ (and), ∨ (or), and ¬ (not). The question is simply whether there’s some assignment of truth and falsity to the variables that makes the expression true. For instance, the expression (A ∨ B) ∧ (¬A ∨ ¬B) (read “(A or B) and (not-A or not-B)”) is satisfiable, because it’s true if either A or B is true and the other is false. The expression has two clauses, (A ∨ B) and (¬A ∨ ¬B).

As the number of clauses increases, this seemingly straightforward problem becomes intractably difficult. One of the tricks SAT solvers use to simplify it is to delete a clause if its conjunction with a second clause is equisatisfiable with the second clause alone, where “equisatisfiable” means that two expressions are either both satisfiable or both unsatisfiable.
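As a small illustration of that deletion criterion (again, an illustrative sketch rather than solver code), equisatisfiability of two tiny formulas can be checked by brute force:

```python
from itertools import product

def is_satisfiable(clauses, variables):
    """Brute-force check; clauses are lists of (variable, required_value) literals."""
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(any(assignment[v] == val for v, val in clause) for clause in clauses):
            return True
    return False

def equisatisfiable(f1, f2, variables):
    return is_satisfiable(f1, variables) == is_satisfiable(f2, variables)

a_or_b = [("A", True), ("B", True)]
not_a_or_not_b = [("A", False), ("B", False)]

# (A OR B) alone and (A OR B) AND (NOT A OR NOT B) are both satisfiable,
# so the solver may delete the second clause without changing the verdict.
print(equisatisfiable([a_or_b], [a_or_b, not_a_or_not_b], ["A", "B"]))  # True
```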


For example, consider an incremental SAT problem that includes the clauses (A ∨ B) and (¬A ∨ ¬B). The solver might keep the first clause and delete the second because (A ∨ B) and the conjunction (A ∨ B) ∧ (¬A ∨ ¬B) are equisatisfiable. Then, because it’s an incremental problem, two new clauses, (A) and (B), are added. (A ∨ B) ∧ (A) ∧ (B) is satisfiable, because (A ∨ B) is true if both A and B are true. But (¬A ∨ ¬B) is false if both A and B are true, so it needs to be added back to the expression, or the SAT solver might give the wrong answer.
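The sketch below (illustrative only) makes the failure mode concrete by comparing the solver’s reduced formula, with the deleted clause missing, against the full formula after (A) and (B) arrive:

```python
from itertools import product

def is_satisfiable(clauses):
    """Brute force over A and B; clauses are lists of (variable, required_value) literals."""
    for a, b in product([True, False], repeat=2):
        assignment = {"A": a, "B": b}
        if all(any(assignment[v] == val for v, val in clause) for clause in clauses):
            return True
    return False

a_or_b = [("A", True), ("B", True)]
not_a_or_not_b = [("A", False), ("B", False)]
unit_a = [("A", True)]
unit_b = [("B", True)]

# The solver's reduced formula, with (NOT A OR NOT B) still deleted:
print(is_satisfiable([a_or_b, unit_a, unit_b]))                   # True  -- wrong verdict

# The real formula, with the deleted clause restored:
print(is_satisfiable([a_or_b, not_a_or_not_b, unit_a, unit_b]))   # False -- correct verdict
```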

When a SAT solver working on an incremental SAT problem deletes a clause, it stores it in a buffer called the reconstruction stack, together with a truth-value assignment that ensures that we can reconstruct a valid assignment in the original problem from the solver-modified problem. When a new clause is added to the problem expression, if the truth-value required to satisfy it conflicts with any of the assignments in the reconstruction stack, the conflicting clauses are restored to the problem expression and re-evaluated. They may receive different truth-value assignments — or the solver may conclude that the expression is unsatisfiable.
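The following is a highly simplified sketch of the reconstruction-stack idea; the class name, data layout, and restore condition are invented for illustration, and real solvers use more refined bookkeeping:

```python
class ReconstructionStack:
    """Toy reconstruction stack: pairs each deleted clause with a witness
    assignment (variable -> value) recorded when the clause was deleted."""

    def __init__(self):
        self.entries = []

    def push(self, clause, witness):
        self.entries.append((clause, witness))

    def restore_conflicting(self, required):
        """Pop and return every stored clause whose witness clashes with the
        truth values the newly added clause requires."""
        restored, kept = [], []
        for clause, witness in self.entries:
            clashes = any(
                var in required and required[var] != value
                for var, value in witness.items()
            )
            (restored if clashes else kept).append((clause, witness))
        self.entries = kept
        return [clause for clause, _ in restored]

# (NOT A OR NOT B) was deleted earlier with witness {"A": False}.
stack = ReconstructionStack()
stack.push([("A", False), ("B", False)], {"A": False})

# The new unit clause (A) requires A = True, which clashes with the witness,
# so the deleted clause must be restored and re-evaluated.
print(stack.restore_conflicting({"A": True}))
# [[('A', False), ('B', False)]]
```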

Algorithmically, this procedure is effective: it ensures that the SAT solver’s verdict will be sound. But its logic is difficult to capture in the language of a formal proof. So while today’s SAT solvers can solve incremental SAT problems, they rarely try to prove that their solutions are sound.

Generating proofs

This is where our method comes in. In addition to deleting clauses from a problem expression, SAT solvers also add clauses. The additions are logically entailed by clauses already in the expression, so they don’t affect satisfiability, but they may make it easier for the solver to recognize potential conflicts between clauses.
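For example, many such additions arise by resolution: combining two clauses that contain a variable with opposite polarities yields a new clause entailed by the pair. A small illustrative sketch (not solver code):

```python
def resolve(clause1, clause2, pivot):
    """Remove every literal of the pivot variable from both clauses and take
    the union of what remains. When one clause contains the pivot positively
    and the other negatively, the result is entailed by the two clauses."""
    merged = {lit for lit in clause1 if lit[0] != pivot}
    merged |= {lit for lit in clause2 if lit[0] != pivot}
    return sorted(merged)

# (A OR B) resolved with (NOT A OR C) on A yields (B OR C): any assignment
# that satisfies both input clauses must also satisfy the resolvent.
print(resolve([("A", True), ("B", True)], [("A", False), ("C", True)], "A"))
# [('B', True), ('C', True)]
```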


A typical proof generator steps through the trace of all these additions and deletions, building up a proof of their validity. Our method instead starts at the end of the trace and works backward. Where we find a step that restores a clause in the proof, we store that clause in a buffer; if we later (that is, earlier in the trace) find the deletion of the same clause, we simply delete both the original deletion and the subsequent restoration. Once we’ve cleaned up the trace from the bottom to the top, we work back through it from the top down, building a proof in the conventional way.
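A simplified rendering of that backward pass (an illustrative reconstruction of the idea as described above, not the authors’ implementation) might look like this:

```python
def cancel_delete_restore_pairs(trace):
    """Backward pass over a solver trace. Steps are ("add" | "delete" | "restore",
    clause) with clauses as tuples of DIMACS-style literals. Every restoration,
    together with the matching earlier deletion, is removed from the trace."""
    pending_restores = {}       # clause key -> unmatched restorations seen so far
    cleaned_reversed = []

    for step, clause in reversed(trace):           # walk from the end backward
        key = tuple(sorted(clause))
        if step == "restore":
            pending_restores[key] = pending_restores.get(key, 0) + 1
            continue                               # drop the restoration
        if step == "delete" and pending_restores.get(key, 0) > 0:
            pending_restores[key] -= 1
            continue                               # drop the matching deletion, too
        cleaned_reversed.append((step, clause))

    return list(reversed(cleaned_reversed))        # back to front-to-back order

trace = [
    ("add", (1, 2)),
    ("add", (-1, -2)),
    ("delete", (-1, -2)),
    ("add", (1,)),
    ("restore", (-1, -2)),
    ("add", (2,)),
]
print(cancel_delete_restore_pairs(trace))
# [('add', (1, 2)), ('add', (-1, -2)), ('add', (1,)), ('add', (2,))]
```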

Since the deleted clauses are equisatisfiable with clauses remaining in the expression, their deletion has no effect on the validity of the ensuing proof steps — at least until the point of conflict with a newly added clause, where the deleted clause was added back anyway. So treating the deletions as if they never happened doesn’t compromise the soundness of the proof.

To evaluate the practicality of our approach, we modified one of the most popular current SAT solvers to implement it and tested it on a dataset of 300 incremental SAT problems, six of which are satisfiable and 294 of which are not. The modified solver produced valid proofs for all 294 unsatisfiable examples. (For the six satisfiable examples, the satisfying truth-value assignment itself serves as the proof.) Our algorithm was also efficient enough to be practical, taking around a minute to produce a one-gigabyte proof, an overhead of about 5% relative to the solving time.




