Blog

The Collatz conjecture

This afternoon I lectured on termination in my course Logic and Computation (we're using ACL2 as a logic and theorem-proving engine, and ACL2 requires a termination proof to admit a definition as a new axiom). This gave me an occasion to use the Collatz conjecture as an example of how non-trivial establishing termination can be, even for simple recursive functions.

The Collatz conjecture, also known as the 3n + 1 conjecture, the Ulam conjecture, the Syracuse problem, or the hailstone sequence, is named after the German mathematician Lothar Collatz, who proposed it in 1937. The conjecture is that for every positive natural number a[0], the sequence a[0], a[1], a[2], a[3], ... given by:

  a[n+1] = 1            if a[n] is 1
  a[n+1] = a[n] / 2     if a[n] is even
  a[n+1] = 3a[n] + 1    if a[n] is odd

always eventually hits 1.

Casting this as a termination problem, of course, is just asking whether the function f given by

  f(1) = 1
  f(n) = f(n / 2)       if n is even
  f(n) = f(3n + 1)      if n is odd

terminates on all positive natural number inputs.
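
Here is a minimal Python sketch of that function (the name collatz_steps and the step counter are my own additions, purely for illustration):

  def collatz_steps(n: int) -> int:
      # Count the steps the Collatz iteration takes to reach 1 from n.
      # Python happily runs this, but proving that the loop exits for
      # every positive n is precisely the Collatz conjecture; ACL2 would
      # reject the recursive version for lack of a provably decreasing
      # measure.
      assert n >= 1
      steps = 0
      while n != 1:
          n = n // 2 if n % 2 == 0 else 3 * n + 1
          steps += 1
      return steps

  print(collatz_steps(27))  # the famous input 27 takes 111 steps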

No one knows whether this is true or not. Experimental evidence suggests that it is, but that is almost meaningless; in particular, we know the conjecture holds for all inputs up to more than 10^16. A good overview can be found in Weisstein's MathWorld:

Because it is so simple to state and so easy to understand, the problem has been the focus of attention for many amateur mathematicians and number-dabblers. In fact, I believe it to be almost a rite of passage to spend some time playing with the conjecture when first going through a math degree. (Thankfully, my own flirtation with the problem amounted to wasting no more than two weeks trying to formalize the "pattern" I saw in the sequences it produced. Ah, the folly of youth...)

Naturally, professional mathematicians have also looked at the problem, and have come up with a surprising number of connections to seemingly unrelated topics. Best known is Jeffrey Lagarias, now at the University of Michigan, and before that a researcher at AT&T Bell Labs.

Quite readable is his paper giving a survey of the problem from a mathematical standpoint:

Also of interest are his annotated bibliographies on the problem:

Both can be found on the arXiv.

Now, I seem to remember someone establishing a connection between this conjecture and the distribution of prime numbers, but I have not been able to retrace this. If anyone knows what I am thinking of, please drop me a line.

Notices of the AMS for March

Well, between holiday lethargy (thanks Shawn Standefer), the beginning of the winter semester here at Northeastern, the CSF deadline a couple of weeks back and the upcoming UAI deadline, I have found precious little time to post since the beginning of the year. But I'm back, and ready to tackle the backlog.

I just received the March issue of the Notices of the AMS in the mail yesterday, and two articles caught my eye.

B. Poonen talks about "Undecidability in Number Theory", surveying the solution to Hilbert's Tenth Problem by Davis, Putnam, Robinson, and Matiyasevich, whose main theorem is that a set is recursively enumerable if and only if it is Diophantine (very roughly, obtainable as the set of solutions of a Diophantine equation). For instance, the set of composite numbers is Diophantine: n is composite exactly when n = (x + 2)(y + 2) has a solution in natural numbers x and y. This amounts, interestingly enough, to programming with Diophantine equations. Applications of the result beyond Hilbert's Tenth are also discussed.

Also in the issue, J. Ewing writes about "Where Are Journals Headed? Why We Should Worry About Author-Pay", yet another article discussing the journal subscription-price crisis. The author makes an interesting point, which I had never thought of before: mathematics (and CS) are hardly in any position to dictate policy, given that we account for such a tiny proportion of published journals. The biomedical sciences, with half the journals, run the show. And the reason this matters is that the publication model for the biomedical sciences is quite different from that of the core sciences.

I will let you explore those articles in your copious free time. The issue is freely available:

Mathrev Review of Gillies "What might be the case after a change in view"

Several months ago I was asked to become a reviewer for the AMS's Mathematical Reviews. The load is light, and I come across interesting papers that I might not otherwise read. I'll post the reviews here as they get done.

The first one is an article by Gillies on epistemic logic and belief revision. I could not find the paper online, but Gillies has slides for a talk with the same title:

And here's the review. (I'm hoping that the fonts will work - latex support on this wiki has not yet been enabled. - EDIT: the Greek font doesn't work...)

A. S. Gillies, ``What Might Be the Case after a Change in View'', Journal of Philosophical Logic, vol. 35, pp. 117--145, 2006.

Belief revision, the study of how agents ought to revise their beliefs in light of new information, is applicable to problems in fields ranging from philosophy to artificial intelligence. This article examines a well-known puzzle in belief revision, involving an inconsistency that results from two fairly unassuming desiderata for a theory of belief revision.

The first desideratum is that a change in view should always be a minimal change in view. This is expressed by a preservation property: if you do not believe ¬P in some state of belief B, then revising that state with P should result in a new state of belief B' that is at least as strong as (carries as many commitments as) B.

The second desideratum, if we further allow reasoning about what might and must be the case, is that agents have some amount of rational introspection with respect to belief. This is expressed by a reflectivity property: a state of belief commits an agent to believing that it might be the case that P if and only if it does not commit the agent to ¬P. Formally, the property requires every state of belief B to satisfy: if ¬P is not in B, then Might(P) is in B, and if P is in B, then Must(P) is in B.

Both of these properties are quite natural, and have been argued in depth. Unfortunately, in the presence of two basic properties of belief revision with unimpeachable credentials, namely that revising a state of belief by P yields a new state of belief that includes P, and that revising a state of belief by some non-contradictory P should yield a consistent state of belief (where a state of belief is consistent if it does not contain both Q and ¬Q for any Q), these desiderata are not satisfiable.

This non-satisfiability was proved by Fuhrmann in ``Reflective modalities and theory change'', Synthese 81, 115-134, 1989. More precisely, Fuhrmann proved that a theory of belief revision with the four properties above can only yield a trivial model of belief, in which every state of belief determines the truth value of every formula: for every state of belief B and every formula P, either P is in B or ¬P is in B. In other words, such a theory admits no state of belief with uncertainty as to whether some formula is true. This is of course unsatisfactory, as uncertain states of belief are the bread and butter of reasoning about belief. Many have attempted to recast the above belief revision puzzle to escape the Fuhrmann triviality result; all reasonable approaches revolve around rejecting either the preservation or the reflectivity property. Orthodoxy, as Gillies points out, is to retain reflectivity at the expense of preservation.
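
To see how triviality arises, here is a reconstruction of the clash (my own paraphrase, reading Must(P) as shorthand for ¬Might(¬P)). Suppose some state of belief B is uncertain about P, that is, contains neither P nor ¬P:

  • Since P is not in B, reflectivity puts Might(¬P) in B.
  • Revise B by P to obtain a new state B'. Success puts P in B'.
  • Since ¬P is not in B, preservation makes B' at least as strong as B, so Might(¬P) remains in B'.
  • But P in B' puts Must(P), that is ¬Might(¬P), in B', so B' contains both Might(¬P) and its negation, contradicting consistency (P was non-contradictory).

Hence no state of belief can be uncertain about any formula, which is exactly Fuhrmann's triviality.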

In this article, Gillies gives a new analysis of the situation, and agrees with the orthodox answer of keeping reflectivity at the expense of preservation, but for reasons different from the ones usually invoked. He carefully analyses the problem using a refined model of belief based on possible worlds, and eventually identifies persistence of epistemic commitments as the main culprit. Roughly speaking, persistence of epistemic commitments means that when an agent believes that the current world is one in some set s in which every world commits her to believing P, and the agent later believes that the current world is one in some subset s' of s, then every world in s' still commits her to believing P. Gillies proves that persistence of epistemic commitments (in conjunction with reasonable requirements on commitments in general) is equivalent to the preservation property. He then argues that in the presence of might and must statements, persistence of epistemic commitments is less attractive as a property, and introduces a belief revision model that does not exhibit persistence of epistemic commitments in general. That model of belief revision is nontrivial, satisfies the reflectivity property, and does not exhibit the preservation property in its full generality, but it retains preservation for non-modal formulas, which agrees with intuition.

This Week's Tabs

First off, an oldie but goodie: Sistla on a syntactic characterization of liveness and safety properties in linear-time temporal logic. (It is still an open problem, I believe, to derive a corresponding result for branching-time temporal logic - correct me if I'm wrong.) This was originally published in PODC'85:

A recent short paper on the foundations of forcing:

The Handbook of Applied Cryptography by Menezes, van Oorschot and Vanstone is available online:

A recent paper on the problem of parallel prefix computation (on which a student of mine wrote a Master's Thesis a couple of months ago), trying to find the equivalent of Knuth's 0-1 Principle that applies to this problem:

Came across this nice little collection of essays by Gabriel on patterns, software, writing, business, and his life story. Originally published in 1996, now available under a creative commons license:

Have a good weekend.

Visual Dictionary is now Online

Nothing to do with logic, but a friend of mine at Québec-Amérique just dropped me an email pointing out that the QA Visual Dictionary has been posted online.

The site is hosted through Merriam-Webster. I beta-tested the CD version of the dictionary a couple of years back, and I'm glad it has made its way to the online world.

For those who are not familiar with the idea, what is the Visual Dictionary?

A dictionary with a new point of view that catches the eye and enriches the mind.

  • 20,000 terms with contextual definitions, developed by terminology experts;
  • 6,000 full-color images of a wide variety of objects from all aspects of life;
  • One essential reference.

The Visual Dictionary is designed to help you find the right word at a glance. Filled with stunning illustrations labeled with accurate terminology in up to six languages, it is the ideal language-learning and vocabulary dictionary for use at school, at home or at work.

When you know what something looks like but not what it's called, or when you know the word but can't picture the object, The Visual Dictionary has the answer. In a quick look, you can match the word to the image.

It is an interesting exercise to try and think what such a dictionary might look like for an abstract domain such as logic or computer science.

Call for Papers - DEON'08

Ninth International Conference on
Deontic Logic in Computer Science
(DEON'08)

Luxembourg, 15-18 July, 2008

Call for Papers

The biennial DEON conferences are designed to promote interdisciplinary
cooperation amongst scholars interested in linking the formal-logical
study of normative concepts and normative systems with computer science,
artificial intelligence, philosophy, organisation theory and law. In
addition to these general themes, DEON2008 will encourage a special
focus on the topic Security and Trust.

There have been eight previous DEON conferences: Amsterdam, December
1991; Oslo, January 1994; Sesimbra, January 1996; Bologna, January 1998;
Toulouse, January 2000; London, May 2002; Madeira, May 2004; Utrecht,
July 2006. Selected papers from each of these conferences have been
published internationally <http://deon2008.uni.lu/publications.html>.

General themes

The Program Committee invites papers concerned with the following topics:

  • the logical study of normative reasoning, including formal systems
    of deontic logic, defeasible normative reasoning, the logic of
    action, and other related areas of logic
  • the formal analysis of normative concepts and normative systems
  • the formal representation of legal knowledge
  • the formal specification of aspects of norm-governed multi-agent
    systems and autonomous agents, including (but not limited to) the
    representation of rights, authorisation, delegation, power,
    responsibility and liability
  • the formal specification of normative systems for the management
    of bureaucratic processes in public or private administration
  • applications of normative logic to the specification of database
    integrity constraints
  • normative aspects of protocols for communication, negotiation and
    multi-agent decision making

Specific Security and Trust themes

DEON2008 has a special focus on logical approaches to deontic notions in
computer science in security and trust, encompassing applications in
e-commerce as well as traditional areas of computer security. Topics of
interest in this special theme include, but are not limited to:

  • digital rights management
  • electronic contracts, including:
    o service level agreements
    o digital media licenses
  • authorization
  • access control
  • security policies
  • privacy policies
  • business processes
  • regulatory compliance

We welcome both theoretical work (formal models, representations,
specifications, logics, verification) and implementation-oriented work
(architectures, programming languages, design models, simulations,
prototype systems) on these specific topics.

Submission details

Authors are invited to submit an original, previously unpublished,
research paper pertaining to any of these topics. The paper should be in
English, and should be no longer than 15 pages, or approximately 7500
words. The first page should contain the full name and contact
information for at least one of the authors, and it should contain an
abstract of no more than ten lines. Authors should submit their papers
electronically, in either .pdf or .ps format.

Authors should consult the instructions on the website
<http://deon2008.uni.lu/submission.html> for submitting their papers
electronically. (Anyone who is unable to send the paper electronically
and wishes to submit hard copies should inquire for details from one of
the program co-chairs at the addresses given below.) Authors are
especially encouraged to use the Springer LNCS styles if possible; these
are available at http://www.springer.de/comp/lncs/authors.html. The
deadline for submission is 11 January 2008.

Timetable

  • 11 January 2008: submission deadline
  • 22 February 2008: notification of acceptance
  • 21 March 2008: deadline for final, camera-ready versions
  • 15 July 2008: NORMAS 2008 workshop
  • 16-18 July 2008: DEON'08 conference

Publication

Copies of the workshop proceedings will be provided to all
participants. The proceedings will be published with Springer in their
LNCS/LNAI series. In addition, revised versions of selected papers from
the workshop will be published in a special issue of the Journal of
Applied Logic (Elsevier).

Invited speakers

To be announced.

General and program co-chairs:

Ron van der Meyden
School of Computer Science and Engineering
University of New South Wales
Sydney NSW 2052, Australia
meyden@cse.unsw.edu.au
tel.: +61 2 8306 0460
fax: +61 2 8306 0405

Leon van der Torre
Individual and Collective Reasoning
Computer Science and Communication
University of Luxembourg
6 rue Richard Coudenhove-Kalergi
L-1953, Luxembourg
leon.vandertorre@uni.lu
tel.: +352-4666445261
fax: +352-4666445500

Organizing committee co-chairs

Martin Caminada
Individual and Collective Reasoning
Computer Science and Communication
University of Luxembourg
martin.caminada@uni.lu
tel.: +352-4666445485
fax: +352-4666445500

Gabriella Pigozzi
Individual and Collective Reasoning
Computer Science and Communication
University of Luxembourg
gabriella.pigozzi@uni.lu
tel.: +352-4666445442
fax: +352-4666445500

Program committee

Paul Bartha (University of British Columbia)
David Basin (ETH Zürich)
Guido Boella (University of Torino)
Jan Broersen (Universiteit Utrecht)
Mark Brown (Syracuse University)
José Carmo (University of Madeira)
Frédéric Cuppens (ENST-Bretagne Rennes)
Robert Demolombe (IRIT Toulouse)
Frank Dignum (Universiteit Utrecht)
Lou Goble (Willamette University)
Joerg Hansen (University of Leipzig / Bachhaus)
Risto Hilpinen (University of Miami)
John Horty (University of Maryland)
Andrew Jones (King's College London)
Ninghui Li (Purdue University)
Lars Lindahl (University of Lund)
Alessio Lomuscio (Imperial College London)
Heiko Ludwig (Watson Research Center, IBM)
Paul McNamara (University of New Hampshire)
John-Jules Meyer (Universiteit Utrecht)
Ron van der Meyden (University of New South Wales)
John Mitchell (Stanford University)
Rohit Parikh (City University of New York)
Adrian Paschke (Technical University Dresden)
Henry Prakken (Universiteit Utrecht / University of Groningen)
Babak Sadighi Firozabadi (Swedish Institute of Computer Science, Stockholm)
Filipe Santos (ISCTE Portugal)
Giovanni Sartor (University of Bologna)
Marek Sergot (Imperial College London)
Carles Sierra (IIIA-CSIC)
Yao-Hua Tan (Vrije Universiteit Amsterdam)
Leon van der Torre (University of Luxembourg, co-chair)
Vicky Weissman (Cornell University)

For further information, see the DEON'08 website: <http://deon2008.uni.lu>

This Week's Tabs

Closing off my browser's tabs, and passing on some links of interest that showed up during the last week:

Related to my last posting, Neal Koblitz wrote a rather controversial article last year on the uneasy relationship between cryptography and mathematics:

Review of a book by Jonathan Sutton, Without Justification, on the thesis that epistemically justified belief is knowledge:

Bob Harper on why call-with-current-continuation corresponds to the law of excluded middle via the Curry-Howard isomorphism:

A book by Nordström, Petersson, and Smith on Martin-Löf's type theory:

A new paper by Joe Halpern and Peter Grünwald on updating sets of probability distributions:

Tim Chow wrote a nice readable introduction to Cohen's forcing, although you'll have to go elsewhere for the theorems and the actual technical details:

A very nice introduction to many different logical ideas by Ted Sider, including propositional and first-order classical logic, many-valued and intuitionistic logics, modal logics and counterfactuals:

Dan Brown, while looking for dualities, ran across and brought to my attention this page by David Johnson on challenges in theoretical computer science. It is old (2000), but I find it somewhat inspiring:

Koblitz on Cryptography and Theorem Proving

Between lectures being over for my OO Design course and the CPA grant submission deadline now past, I can finally start attacking the accumulated pile that has been clamouring for my attention for the past three weeks. Time to start weeding out before the Christmas holidays.

Andy Gordon points us to an article by Neal Koblitz raising concerns about the application of automated theorem proving (ATP) to cryptography.

Koblitz is not an enthusiast of ATP, which is hardly surprising, seeing as mathematicians have on the whole never been very enthusiastic about mechanized proofs.

He makes several points in the article, and it is worthwhile to try and disentangle them. As far as I can tell, he has two main points, essentially orthogonal.

The first point he makes is specifically about ATP as it pertains to cryptographic reduction proofs. Recall that a cryptographic reduction proof attempts to prove a property of a system by, very roughly speaking, reducing it to an assumed property of a cryptographic primitive. In other words, if the system does not have the claimed property, that failure can be turned into a failure of some claimed property of a cryptographic primitive. Koblitz basically claims that the formalizations of cryptographic reduction proofs that people have used in their ATP approaches to proving cryptographic properties are not adequate, and miss out on some of the subtleties in the hand-derived reduction proofs.
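
To make the shape of such a reduction concrete, here is a toy Python sketch, entirely my own construction rather than anything from Koblitz's article: a stream-cipher-style scheme built from a stand-in "pseudorandom generator" G, together with the wrapper that turns any attack on the scheme into an attack on G.

  import hashlib

  def G(seed: bytes) -> bytes:
      # Toy stand-in for a pseudorandom generator (illustration only).
      return hashlib.sha256(seed).digest()

  def encrypt(key: bytes, msg: bytes) -> bytes:
      # XOR the message with the pad G(key), one-time-pad style.
      pad = G(key)
      return bytes(a ^ b for a, b in zip(pad, msg))

  def reduction(break_scheme):
      # Given an adversary that recognizes encryptions of a known
      # message, build a distinguisher for G itself: embed the
      # challenge string as the pad and run the adversary on it.
      def break_prg(challenge_pad: bytes) -> bool:
          msg = b"attack at dawn!!"
          ciphertext = bytes(a ^ b for a, b in zip(challenge_pad, msg))
          return break_scheme(ciphertext)
      return break_prg

The point of a reduction proof is that break_prg succeeds whenever break_scheme does, so the scheme inherits the assumed security of G; the subtleties Koblitz worries about live precisely in the bookkeeping (success probabilities, running times, how the challenge is embedded) that a sketch like this sweeps under the rug.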

That is a valid point, and as I said, it really has nothing to do with theorem proving (although it comes to light especially when someone attempts to formalize cryptographic arguments in the language of a theorem prover); rather, it has to do with the adequacy or inadequacy of a formal language for expressing a particular property or line of reasoning. Cryptographic reduction proofs are notoriously subtle, and an artificial language (that is, a language which is not the whole language of mathematics, but a more restricted one) will fail to capture some of those subtleties. But then again, it is the desire to understand those subtleties, and to figure out a language in which they can be made clear, that has led to the search for languages general enough to capture the kind of reasoning that occurs in cryptographic proofs while abstracting away from some of the more technically painful details. (I like to think of this as "Design Patterns for Cryptographic Proofs".) A particularly interesting example, a personal favourite if you will, is the logic of Impagliazzo and Kapron from FOCS'03.

There are many others out there, and the jury is still out on what the best logic is, if such a beast even exists. But the key point of such a logic is for a proof within the logic to be sound with respect to some standard cryptographic semantics: valid reasoning in the logic leads to valid reasoning in the underlying cryptographic model. (It will usually not be complete, in the sense that you can do more in the underlying cryptographic model; but you are guaranteed that if you do something in the logic, it will work out in the cryptographic model too.) One interpretation of Koblitz's critique is that the underlying cryptographic model taken by these logics is often the wrong one. This may be true; I am not a cryptographer. But that does not mean that the whole approach is doomed, and the goal of finding a simpler language for expressing cryptographic arguments that still lets you do valid cryptographic reasoning is, I think, a worthy one.

The second point Koblitz makes in his article is a much more general critique than whether current approaches to ATP have anything to say to cryptographers: he questions whether ATP has anything interesting to say to mathematicians as a whole. He says that ATP has not, to date, discovered any nontrivial theorem not already proved by mathematicians, aside from the odd conjecture here and there, such as the Robbins problem in Boolean algebra, or controversial proofs, such as Appel and Haken's computer-assisted proof of the four-colour theorem.

Koblitz does make the distinction between using ATP to come up with a proof for a hitherto unproved result, versus verifying that a proof is correct by formalizing it in a theorem prover. And he seems to be quite comfortable with the use of ATP in the latter case. But of course, that raises some questions... what would he say to Gonthier's formalization of Appel and Haken's proof in Coq? It is a verification of an existing proof, except that here the original proof was itself computer-assisted.

In any case, one big question, from a computer scientist's point of view, is: what about all the uses of ATP to prove the correctness of processors, or to prove metatheoretic results in programming language semantics, such as type system soundness? I think the crux of the matter is that Koblitz does not view, say, a soundness theorem for a type system as a bona fide theorem. And I believe that the implicit definition of a theorem in many mathematicians' minds somewhat begs the question of whether ATP can prove theorems: a theorem is a result whose proof provides insight as to why the result is true. This makes a theorem a subjective thing, which meshes well with the fact that what makes a proof a proof is already subjective for mathematicians (if not socially subjective).

I am willing to concede that a purely formal proof, derived by searching the space of all proofs for one that happens to prove a theorem, may offer little to someone wanting to see a proof to get some insight as to why a theorem is true.

But I'll remind everyone that the formalist program is dead. Gödel, among others, saw to that. No one I know advocates purely formal ATP as a replacement for intuition and insight. In fact, if there's anything that becomes clear after using a theorem prover, it's that intuition and insight are required to guide the prover towards the result one wants. Really, one should think of the prover as a helpful assistant, available for proving small annoyingly technical steps, leaving the user in charge of coming up with the big steps of the proof. (This is an extreme simplification of what happens when using a theorem prover, but I will stand by it as a motivating analogy.) And I do firmly believe that in that role, ATP has a lot to offer.

SIGACT News Logic Column 20

The new SIGACT News Logic Column is available. It is very short, consisting of the synopsis of Simon Kramer's Ph.D. thesis from the École Polytechnique Fédérale de Lausanne, Logical Concepts in Cryptography.

As I state in the editorial blurb, Simon introduces a new logic for reasoning about cryptographic protocols called CPL (Cryptographic Protocol Logic). The result is an intriguing blend of modal logic and process algebra, reminiscent of spatial logics for process calculi, but extended with knowledge, belief, and provability operators.

On Coinduction

A slow weekend - I needed the rest.

Dexter Kozen just uploaded an article on Coinductive Proof Principles for Stochastic Processes to CoRR, wherein he gives "an explicit coinduction principle for recursively-defined stochastic processes. The principle applies to any closed property, not just equality, and works even when solutions are not unique. The rule encapsulates low-level analytic arguments, allowing reasoning about such processes at a higher algebraic level. We illustrate the use of the rule in deriving properties of a simple coin-flip process." I looked at this paper a while back, and it remains interesting.
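
To get a feel for what a recursively-defined stochastic process looks like, consider a toy example of my own (not the one in Kozen's paper): the probability p that a fair coin first comes up heads on an odd-numbered flip. After two tails the process restarts, shifted by two flips, which preserves parity, so p satisfies the recursive equation p = 1/2 + (1/4)p, giving p = 2/3. A quick sanity check in Python:

  import random

  def first_head_is_odd() -> bool:
      # Flip a fair coin until heads; report whether that flip is odd-numbered.
      flip = 1
      while random.random() < 0.5:  # tails: keep flipping
          flip += 1
      return flip % 2 == 1

  p_algebraic = (1 / 2) / (1 - 1 / 4)  # solves p = 1/2 + (1/4) * p
  trials = 100_000
  p_empirical = sum(first_head_is_odd() for _ in range(trials)) / trials
  print(p_algebraic, p_empirical)  # both close to 0.6667

Kozen's rule is about making exactly this style of fixpoint reasoning (solving for p directly from the self-similarity of the process) rigorous at the algebraic level, without the low-level analytic arguments.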

On a different bit of news, the CSLI Lecture Notes are now freely available online, courtesy of the Stanford Medieval and Modern Thought Text Digitization Project.

That series includes the classics

  • Troelstra's Lectures on Linear Logic
  • Goldblatt's Logic of Time and Computation

and to remain in the spirit of coinduction, two of the core monographs on the theoretical foundations of coinduction:

  • Aczel's Non-well-founded Sets
  • Barwise and Moss's Vicious Circles: On the Mathematics of Non-wellfounded Phenomena

(via Richard Zach's LogBlog)

J Strother Moore's Visit to NU

J. Strother Moore, the Moore in Boyer-Moore, just completed a two-day visit to Northeastern. He gave a talk, Machines Reasoning about Machines, complete with a demo using the ACL2 theorem prover. ACL2 is a first-order theorem prover for proving equational properties of programs written in an applicative subset of Common Lisp. It is amazingly fast, and comes with a well-equipped library of theories to work with.

Fascinating work, if only because I am set to teach an undergraduate course using ACL2 next semester with Pete Manolios.

Now if we could just move the ACL2 technology to a typed language and extend it to higher-order, I'd be happy. (I know, I know, there are fundamental limitations re the latter, but still, even getting halfway there would be fun.)

Pete told me that the two books on ACL2:

  • Computer-Aided Reasoning: An Approach
  • Computer-Aided Reasoning: ACL2 Case Studies

are now available in paperback at a very reasonable price.

Two TCS Awards

Two prizes will be awarded at ICALP 2008 next spring in Reykjavík.

  • The EATCS Award 2008: the call for nominations is now available. The EATCS Award committee consists of Catuscia Palamidessi, David Peleg (chair), and Emo Welzl. Nominations should be sent to David Peleg by December 1, 2007. Further details.

(Via Process Algebra Diary)

Repairing Models

Model checking is the problem of determining whether a formula of some logic (often a temporal logic) is true in a given model. It is a central technique in formal verification, where the model is a representation of the system being analyzed and the formula represents a specification that must be met. Suppose the model does not satisfy the formula. What then? Most model-checking techniques can extract a counterexample from the failed check, and the standard approach is then to revise the model based on the counterexample (in other words, to debug the system to correct whatever led to the failure of the specification). An obvious question is whether the model can be automatically modified in light of the counterexample to make it satisfy the specification.

The following paper describes a special form of this problem, where the model is modified by removing transitions only (a variant the authors call repairing the model). They show that the problem of repairing a Kripke structure M so that it satisfies a CTL (or ATL) formula that M failed to satisfy is NP-complete, and they give a repair algorithm that translates the problem into a SAT instance to be handed to a SAT solver.

Model and Program Repair via SAT Solvers, by P. Attie and J. Saklawi.
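
As a toy illustration of repair-by-deletion, here is a naive brute-force search of my own construction; the paper's actual algorithm goes through a SAT encoding and handles full CTL/ATL, whereas this sketch checks only a simple invariant and ignores the requirement that every state keep at least one outgoing transition:

  from itertools import combinations

  def reachable(transitions, init):
      # States reachable from init under the given transition relation.
      seen, stack = {init}, [init]
      while stack:
          s = stack.pop()
          for t in transitions.get(s, ()):
              if t not in seen:
                  seen.add(t)
                  stack.append(t)
      return seen

  def repair(edges, init, bad):
      # Find a smallest set of transitions whose removal makes every
      # reachable state good (the invariant "AG not-bad"); exponential search.
      for k in range(len(edges) + 1):
          for removed in combinations(edges, k):
              kept = [e for e in edges if e not in removed]
              transitions = {}
              for (u, v) in kept:
                  transitions.setdefault(u, []).append(v)
              if not (reachable(transitions, init) & bad):
                  return kept
      return None

  # s0 <-> s1 is fine, but s0 -> s2 reaches a bad state: drop that edge.
  print(repair([("s0", "s1"), ("s1", "s0"), ("s0", "s2")], "s0", {"s2"}))

The NP-completeness result says that, in the worst case, the exponential blow-up in this kind of search is unavoidable, which is what makes offloading the work to a SAT solver a sensible strategy.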

Automated Analysis of Non-repudiation Properties

Formal verification of security protocols has traditionally focused on properties such as authentication and secrecy. A recent technical report from INRIA describes a tool for formal verification of non-repudiation properties:

Automatic Methods for Analyzing Non-Repudiation Protocols with an Active Intruder, by F. Klay, J. Santiago, and L. Vigneron

Non-repudiation, very roughly speaking, is a property guaranteeing that agents interacting via a protocol cannot deny having performed a given action, for instance, sending a message. The exact details of what is guaranteed and what counts as evidence vary, of course. Many protocols have been designed to try to enforce non-repudiation in specific situations, but they are surprisingly difficult to get right. The paper above describes an extension of the AVISPA model checker that can check non-repudiation properties expressed in a temporal logic extended with an "agent knowledge" predicate.
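
As a toy illustration of the kind of evidence non-repudiation rests on (my own sketch, unrelated to the paper's formalism; it assumes the third-party pyca/cryptography package): a digital signature, unlike a shared-key MAC, can be checked by anyone holding the public key, so the signer cannot later deny having produced it.

  from cryptography.exceptions import InvalidSignature
  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

  # Alice signs a message; only she holds the private key.
  private_key = Ed25519PrivateKey.generate()
  public_key = private_key.public_key()
  message = b"I agree to pay Bob 100 euros."
  signature = private_key.sign(message)

  # Anyone (Bob, or a judge) can check the evidence against Alice's
  # public key; a forged or altered message fails verification.
  try:
      public_key.verify(signature, message)
      print("evidence checks out")
  except InvalidSignature:
      print("signature rejected")

Turning this primitive into a protocol-level guarantee (fair exchange of evidence, trusted third parties, what exactly a judge will accept) is where the subtleties the paper tackles come in.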

CMU Courses on Logic and Security

Two interesting courses from CMU: Bob Harper is running a graduate reading course on Languages and Logics for Security, and Anupam Datta has a course on Foundations of Security and Privacy.

(Via A Toosht o Whigmaleeries)