Hi folks.

I have decided to move Close Encounters of the Logical Kind to Wordpress:

http://closeencounterslogicalkind.wordpress.com

See you there...

A hodgepodge of logic-related papers that have come across my virtual desk these last few weeks.

First though, I should mention the collection of online tutorials and textbooks on logic, both introductory and advanced, that Henri Galinon posted at the Theoreme Logic Toolbox. These include books on model theory, the theory of truth, modal logic, conditional logic, proof theory and substructural logics, many-valued logics, and linguistic applications. (Courtesy of Richard Zach from LogBlog)

Dosen and Petric have a paper out on arXiv on coherence for modalities: "Abstract: Positive modalities in systems in the vicinity of S4 and S5 are investigated in terms of categorial proof theory. Coherence and maximality results are demonstrated, and connections with mixed distributive laws and Frobenius algebras are exhibited."

- Coherence for Modalities, by K. Dosen and Z. Petric

The BEATCS concurrency column, edited by Luca Aceto, presents an article by Dale Miller on operational semantics specifications in logic, showing that "specifications written in SOS, abstract machines, and multiset rewriting, are closely related to Horn clauses, binary clauses, and (a subset of) linear logic, respectively." Plus a host of other interesting things, too.

- Formalizing Operational Semantic Specifications in Logic, by Dale Miller

Beyersdorff et al have a paper out on the complexity of common decision problems for propositional default logic, where they systematically restrict the set of allowed propositional connectives. Probably interesting if you care about default logic in practice.

- The Complexity of Reasoning for Fragments of Default Logic, by O. Beyersdorff, A. Meier, M. Thomas, and H. Vollmer

Dov Gabbay and Karl Schlechta give an overview of logical and semantical rules for nonmonotonic and related logics.

- Roadmap for Preferential Logics, by D. Gabbay and K. Schlechta

I've been quiet here lately. I was away for the summer, visiting Andy Gordon at Microsoft Research, Cambridge, where we worked on representing stateful computations with types, based on the RCF language of Andy and his colleagues. I will talk about this in the coming months, I believe.

But I'm back in Boston now, and the buzz of the Fall term just starting sounds like a jet engine. I'm teaching two courses, an undergrad Object-Oriented Design course, and a graduate Cryptography and Communication Security course. More on these as the term progresses.

In the latest SIGACT News, there is a nice article about the kind of theory research going on at Google. Not logic per se, but interesting nonetheless. Unsurprisingly, much focus on algorithms and on auctions. Which does remind me of a nice chat I had with a researcher from Google several months ago, after which we concluded that there is a need for a nice specification language for auctions. As far as I can tell, that does not exist. (If you know otherwise, please holler.)

- Theory Research at Google, by G. Aggarwal, N. Ailon, F. Constantin, E. Even-Dar, J. Feldman, G. Frahling, M. Henzinger, S. Muthukrishnan, N. Nisan, M. Pal, M. Sandler, A. Sidiropoulos.

Flushing buffers before going off to spend a few months at MSR in Cambridge.

The Notre Dame Philosophical Reviews are quite interesting, covering books in philosophy, and usually going into an amount of detail sufficient to convey quite a lot of information about the topic at hand.

The following recent books and reviews may be of interest to the logically minded:

Another call for papers, this time for a Digital Rights Management workshop, co-located with CCS'08 in Alexandria (VA) in October.

Submission deadline on May 23rd.

----------------------------------------------------------------------
Call for Papers

EIGHTH ACM DRM WORKSHOP
(Co-located with ACM-CCS 2008, Alexandria, Virginia, USA)
http://www.ece.unm.edu/DRM2008/

Submission deadline: May 23, 2008
Workshop: October 27, 2008 - Alexandria, Virginia, USA
----------------------------------------------------------------------

The ACM Workshop on Digital Rights Management is an international forum that serves as an interdisciplinary bridge between areas that can be applied to solving the problem of Intellectual Property protection of digital content. These include: cryptography, software and computer systems design, trusted computing, information and signal processing, intellectual property law, policy-making, as well as business analysis and economics. Its purpose is to bring together researchers from the above fields for a full day of formal talks and informal discussions, covering new results that will spur new investigations regarding the foundations and practices of DRM. This year's workshop, the eighth in the series, continues this tradition. As in previous editions, it is sponsored by ACM SIGSAC and is held in conjunction with the ACM Conference on Computer and Communications Security (CCS).

Topics of interest include but are not limited to:

- anonymous publishing, privacy and DRM
- architectures for DRM systems
- business models for online content distribution, risk management
- copyright-law issues, including but not limited to fair use
- digital goods and online multiplayer games
- digital policy management
- DRM and consumer rights, labeling and competition law
- implementations and case studies
- information theory and combinatorics, including marking assumptions and related codes
- robust identification of digital content
- security issues, including but not limited to authorization, encryption, tamper resistance, and watermarking
- regulatory authority for DRM, interoperability
- supporting cryptographic technology, including but not limited to traitor tracing, broadcast encryption, obfuscation
- threat and vulnerability assessment
- trusted computing, attestation, hardware support for DRM, side-channels
- usability aspects of DRM systems
- web services related to DRM systems

IMPORTANT DATES:

- Submission deadline: May 23, 2008
- Notification of acceptance: July 10, 2008
- Camera-ready version: August 8, 2008
- Workshop: October 27, 2008

INSTRUCTIONS FOR AUTHORS:

Submissions must not overlap with papers that have been published or that are simultaneously submitted to a journal or a conference with proceedings. Submissions should be at most 15 pages excluding the bibliography and well-marked appendices, using at least 11-point font and reasonable margins. Committee members are not required to read the appendices, and thus submissions should be intelligible without them. Each submission should start with the title, abstract, and names and contact information of authors. All submissions will be handled electronically.

For submission instructions and further information, please point your web browser to: http://www.ece.unm.edu/DRM2008/

PROCEEDINGS:

Accepted papers will be published in an archival proceedings volume by ACM Press and will be distributed at the time of the workshop.

ORGANIZATION:

Program Chairs
- Gregory Heileman (U. New Mexico, USA)
- Marc Joye (Thomson, France)

Program Committee
- Olivier Billet (Orange Labs, France)
- Xavier Boyen (Voltage, USA)
- Alain Durand (Thomson, France)
- Rudiger Grimm (U. Koblenz, Germany)
- Bill Horne (Hewlett-Packard, USA)
- Hongxia Jin (IBM, USA)
- Aggelos Kiayias (U. Connecticut, USA)
- David Kravitz (Motorola Labs, USA)
- Brian LaMacchia (Microsoft, USA)
- William Lehr (MIT, USA)
- Nasir Memon (Polytechnic U., USA)
- Fernando Perez-Gonzalez (U. Vigo, Spain)
- Rei Safavi-Naini (U. Calgary, Canada)
- Bin Zhu (Microsoft, China)

General Chair
- Peng Ning (NCSU, USA) (also General Chair for ACM-CCS 2008)

Steering Committee
- Joan Feigenbaum (Yale U., USA)
- Aggelos Kiayias (U. Connecticut, USA)
- Rei Safavi-Naini (U. Calgary, Canada)
- Tomas Sander (Hewlett-Packard, USA)
- Moti Yung (Google & Columbia U., USA)

The Scheme, Haskell, and ML workshops - although I read that the Haskell workshop is now the Haskell symposium - will all be held with ICFP (the International Conference on Functional Programming) in Vancouver, BC, in late September. Here are the relevant web pages:

- 2008 Workshop on Scheme and Functional Programming
- ACM SIGPLAN 2008 Haskell Symposium
- 2008 ACM SIGPLAN Workshop on ML

Submission deadlines for all of these are in late June.

From Carla Romero, at CRA. She notes that the deadline for student grants is April 15th.

CRA-W/CDC Summer School on Logic Programming and Computational Logic
http://www.cs.nmsu.edu/~ipivkina/compulog.htm
New Mexico State University, Las Cruces, NM, USA
July 24-27, 2008

New Mexico State University is happy to announce the CRA-W/CDC Summer School on Logic Programming and Computational Logic. The summer school will be held on the campus of New Mexico State University in beautiful Las Cruces, New Mexico. The summer school is intended to encourage WOMEN GRADUATE STUDENTS, POST-DOCTORAL STUDENTS, and YOUNG RESEARCHERS to develop interest and expertise in the important areas of constraints, logic programming, answer set programming, computational logic, and their applications. Exceptional undergraduate students in their senior year are also encouraged to attend.

The summer school will include lectures as well as other events aimed at providing a complete and fulfilling learning experience. The lectures will be given by internationally renowned researchers who have made significant contributions to the advancement of these disciplines. The summer school is a good opportunity for quickly acquiring background knowledge on important areas of computational logic. It will consist of six half-day tutorials on the following topics:

- Theoretical Foundations of Logic Programming [Miroslaw Truszczynski, U. of Kentucky]
- Answer Set Programming [Torsten Schaub, U. of Potsdam]
- Implementation and Execution Models for Logic Programming [Manuel Hermenegildo, Polytechnic Univ. of Madrid]
- Logic Programming and Multi-agent Systems [Francesca Toni, Imperial College]
- Foundations of Constraint and Constraint Logic Programming [TBA]
- Foundations of Semantic Web and Computational Logic [Sheila McIlraith, University of Toronto]

There is no charge for participation in the summer school, and a number of fellowships are available to women participants that will cover all reasonable travel expenses.

Applicants are asked to submit an application for admission to the summer school composed of the following items:

1. a one-page statement of interest, explaining your research background and what you expect to gain from the summer school
2. a short (2-page) vitae

Applications should be submitted in electronic form to: epontell@cs.nmsu.edu and ipivkina@cs.nmsu.edu. All submissions will be acknowledged with an email. If you do not receive an acknowledgment within 3 working days, please email Enrico Pontelli (epontell@cs.nmsu.edu). Lodging will be available at local hotels; we will also provide a number of affordable accommodations on the NMSU campus.

Important dates
- Requests for student grants: April 15, 2008
- Application for Admission: April 25, 2008
- Notification of Admission and grants: May 1, 2008
- Summer School: July 24-27, 2008

Organizers
- Enrico Pontelli, New Mexico State University, USA
- Inna Pivkina, New Mexico State University, USA
- Son Cao Tran, New Mexico State University, USA

The big news of last week, at least in certain circles, is the death of Gary Gygax, creator of what was in my youth the hugely popular game Dungeons & Dragons, in the view of many a predecessor and inspiration for the modern computer gaming industry.

Of course, online comics have had a field day with this, with some of the best tributes to Gygax that the medium can deploy. My personal favorite is *xkcd*'s:

A deeper reflection on the history and influence of D&D can be found in Paul La Farge's 2006 essay in *The Believer* (from *Grand Text Auto*):

- Paul La Farge, Destroy All Monsters

Slightly closer to home, I came across this wonderful essay over the weekend, a critique by Paul Lockhart of current K-12 mathematics education in the US. The bottom line is that we fail miserably to teach mathematics in a way that highlights how interesting it is. For too many students, mathematics is drudgery and symbol manipulation without rhyme or reason. Part of the problem is that culture as a whole does not readily see mathematics as art, something that becomes obvious as soon as one really starts understanding the subject. For instance, most characterizations of proofs given by mathematicians are couched in terms of aesthetic value judgments, and proofs are by far the biggest hurdle in most standard maths undergrad curricula. (The answers to the question "But what is an acceptable proof?" that I asked during my first year at McGill in retrospect shared much in common with answers to the question "But what is a nice painting?")

- Paul Lockhart, A Mathematician's Lament

In a wonderful happenstance of synchronicity, I also came across the following blogpost by Kathy Sierra over at *Creating Passionate Users* (dating from almost two years ago, so nothing recent), about aesthetics in program development. It has been my personal belief for a while now that software development is very much like architecture: half technique and half art, and a well-developed aesthetic sense is in fact necessary for writing maintainable and correct code.

- Kathy Sierra, Code Like a Girl

Which leads to the obvious question: are we messing up CS education the same way we are messing up Maths education, and for similar reasons?

Back from a short holiday in the White Mountains (NH), snowshoeing and cross-country skiing. We stayed at a delightful little bed and breakfast near Gorham, the Mt Washington Bed and Breakfast. I highly recommend it. (Tell Mary Ann that Riccardo sent you.)

I hold a logic discussion group, and this semester we are slowly working our way to Plotkin and Power's work on equational characterizations of effects, an alternative to Moggi's monadic approach based on Power's Freyd categories. Perhaps I will get to talk about this later.

Anyways, we started last month by looking at Lawvere theories, Lawvere's 1960s approach to universal algebra via category theory. The idea is pretty simple. An algebraic structure can be defined by an underlying set A along with operations on A and a set of equations over the elements of the set. This can be represented rather simply using a category whose objects are the products of A, that is, A^0, A^1, A^2, ..., and whose arrows are the operations, including all projections. We can encode the equations by stating that some diagrams commute.

Let's look at a simple example, groups. A group is traditionally described as a tuple (G,*,-,e), where G is a set (the elements of the group), * is a binary operation G x G -> G, - is a unary operation G -> G, and e is a constant (or nullary operation) which we can represent as 1 -> G, where 1 is any one-element set, subject to the following equations:

- x * (y * z) = (x * y) * z
- x * e = x = e * x
- x * (-x) = e = (-x) * x

The Lawvere theory for groups, call it LG, is basically the category with objects a^0=1, a^1, a^2, ... (for some unspecified object a), taking the arrows to include all projections together with three arrows * : a^2 -> a, - : a -> a, and e : 1 -> a, subject to the following diagrams commuting (it's a pain to draw diagrams in Confluence, so I shan't do it...):

- * o < * , id_a > = * o < id_a , * >
- * o < e o ! , id_a > = id_a = * o < id_a , e o ! >
- * o < id_a , - > = e o !
- * o < - , id_a > = e o !

(where id_a : a -> a is the identity arrow, ! : a -> 1 is the unique arrow from a to 1, o is arrow composition, and <f,g> is the usual pairing of arrows).

An actual group (i.e., a model of the theory) can be taken to be a product-preserving functor from the category LG to the category of sets. Such a functor maps the basic object a of LG to some set G (representing the elements of the group), and the functor properties ensure that the elements of G satisfy the group equations. (Try it!) In other words, a group is a product-preserving functor LG -> Set.
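Since a model is just such a functor into Set, checking that a concrete structure is a group amounts to checking the equations that the commuting diagrams encode. Here is a minimal sketch (my own illustration; the names `mul`, `inv`, and `e` are assumptions, not from any library) verifying that the integers mod 5 form such a model:

```python
# A sketch: interpret the Lawvere theory of groups in Set, with the base
# object a sent to Z/5. The commuting diagrams become universally
# quantified equations, checked here by brute force.

n = 5
G = range(n)

def mul(x, y):        # interprets the arrow * : a^2 -> a
    return (x + y) % n

def inv(x):           # interprets the arrow - : a -> a
    return (-x) % n

e = 0                 # interprets the arrow e : 1 -> a (picks out the unit)

# Associativity: * o < * , id_a > = * o < id_a , * >
assert all(mul(x, mul(y, z)) == mul(mul(x, y), z)
           for x in G for y in G for z in G)
# Unit laws.
assert all(mul(x, e) == x == mul(e, x) for x in G)
# Inverse laws.
assert all(mul(x, inv(x)) == e == mul(inv(x), x) for x in G)
```

The same brute-force check works for any finite interpretation of the theory; only the three arrows need to be reinterpreted.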

A good introduction to this is Awodey and Bauer's lecture notes on categorical logic, specifically section 2.1 here:

- S. Awodey, A. Bauer, Lecture Notes: Introduction to Categorical Logic

Now, one question I had at some point is how this handles algebraic structures defined by implications (that is, Horn clauses). Think of these as conditional equations. The main example of this that I encounter is Kleene algebras, axiomatized as follows, using Kozen's axiomatization. Recall that a Kleene algebra is an algebra with signature ( K , . , + , 0 , 1 , * ), where ( K , . , + , 0 , 1 ) is an idempotent semiring and * is a unary operation subject to the following:

- 1 + x.(x*) <= x*
- 1 + (x*).x <= x*
- if a.x <= x, then (a*).x <= x
- if x.a <= x, then x.(a*) <= x

(Here, x <= y is defined as x + y = y, which is a partial order when you have an idempotent semiring.) Note that the last two axioms are conditional, that is, they are really implications. How do you model this in Lawvere's framework?
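As a sanity check on the axioms, here is a sketch (my own illustration, not from Kozen's paper) verifying them for the smallest Kleene algebra: the two-element Boolean algebra with + as disjunction, . as conjunction, and x* = 1 for every x:

```python
# The two-element Kleene algebra {0, 1}: + is "or", . is "and", star is
# constantly 1. The order x <= y is defined as x + y = y, per the post.

K = [0, 1]
plus = lambda x, y: x | y
dot  = lambda x, y: x & y
star = lambda x: 1

leq = lambda x, y: plus(x, y) == y   # x <= y  iff  x + y = y

# 1 + x.(x*) <= x*   and   1 + (x*).x <= x*
assert all(leq(plus(1, dot(x, star(x))), star(x)) for x in K)
assert all(leq(plus(1, dot(star(x), x)), star(x)) for x in K)
# if a.x <= x then (a*).x <= x   (an implication, checked as not-P-or-Q)
assert all(not leq(dot(a, x), x) or leq(dot(star(a), x), x)
           for a in K for x in K)
# if x.a <= x then x.(a*) <= x
assert all(not leq(dot(x, a), x) or leq(dot(x, star(a)), x)
           for a in K for x in K)
```

Note how the conditional axioms are checked as implications over all elements, which is exactly what makes them Horn clauses rather than plain equations.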

The set of models for an equational theory that uses conditional equations forms a *quasivariety*. (In contrast, the set of models for an equational theory that uses only unconditional equations forms a *variety*; quasivarieties, as the name implies, are almost varieties, but not quite.) The following paper, which appeared in the Journal of Algebra 208 (1998), 379-398, seems to hint at the appropriate Lawvere development for quasivarieties, although it is quite technical:

- J. Adamek, H.-E. Porst, Algebraic Theories of Quasivarieties

What I am still not sure of, though, is how to concretely give a description of a theory, given a specific equational theory. In particular, given the axiomatization of Kleene algebras given above, can we concretely describe the category LK in the Adamek-Porst framework that plays the role of Lawvere theory, and for which the functors LK -> Set are exactly the Kleene algebras?

Okay, a couple of people have been asking me about separation logic out of the blue, and I am taking it as a sign. This is of interest to me because Andy Gordon and I have been trying to wrap our collective heads around separation logic, a project I hope to be able to talk about before long.

So what's the big hoopla? To understand the why of separation logic, you first have to understand the basics of reasoning about programs, Hoare-logic style. Hoare logic is a logical system whose main judgment has the form `{P} C {Q}`, where `P` and `Q` are first-order logic formulas, and `C` is a program, generally in some sequential imperative programming language. The interpretation of such a *Hoare triple* is that when executed in a state satisfying formula `P`, program `C` either does not terminate, or terminates in a state satisfying formula `Q`. Formulas `P` and `Q` may refer to program variables. Thus, for instance, the triple `{x=0 /\ y=0} C {res>=0}` says that program `C`, when started in a state where `x=y=0`, may terminate in a state where `res` is non-negative (or it may not terminate). Hoare logic comes with a proof system that lets you derive Hoare triples compositionally from Hoare triples about the subprograms appearing in `C`. Thus, for example, if you know `{P} C {Q}` and `{Q} D {R}`, then you can derive `{P} C;D {R}`, where `C;D` represents the sequential composition of `C` and `D`.
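The reading of the judgment and the sequencing rule can be illustrated with a small sketch (my own, with hypothetical names; real Hoare logic derives triples by proof, whereas this merely tests them on sample states):

```python
# Illustration only: states are dicts, programs are state-transforming
# functions, and a triple {P} C {Q} is "checked" by running C on a finite
# sample of states satisfying P and testing Q on the results.

def holds(P, C, Q, states):
    """Test the triple {P} C {Q} on the given sample of states."""
    return all(Q(C(s)) for s in states if P(s))

def seq(C, D):
    """Sequential composition C;D."""
    return lambda s: D(C(s))

C = lambda s: {**s, "res": s["x"] + s["y"]}   # res := x + y
D = lambda s: {**s, "res": s["res"] ** 2}     # res := res * res

P = lambda s: s["x"] == 0 and s["y"] == 0     # precondition x=0 /\ y=0
Q = lambda s: s["res"] >= 0                   # postcondition res >= 0

samples = [{"x": x, "y": y} for x in range(-2, 3) for y in range(-2, 3)]

assert holds(P, C, Q, samples)            # {x=0 /\ y=0} C {res>=0}
assert holds(P, seq(C, D), Q, samples)    # and likewise for C;D
```

Testing on samples can of course only refute a triple, never establish it; the point of the proof system is to derive triples that hold for all states.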

This works surprisingly well for simple imperative languages, but does not scale easily to the kind of features found in real programming languages. In particular, programs that manipulate pointers have always been problematic. Dealing with heap memory itself is not so bad: simply extend Hoare logic with a heap and allow the formulas `P` and `Q` in a Hoare triple to refer to values on the heap. The problem is that the heap must then be treated as one large unit, while what one really wants is to reason *locally*, taking into account only the part of the heap that a piece of code reads and modifies. Reynolds and O'Hearn, separately, came up with a rather clean way to express this kind of local reasoning by introducing a *separation* operator into the predicate language, which can be used to say that the heap is separable into two disjoint pieces that do not refer to each other through pointers. Thus, it is possible to reason locally about the heap that a piece of code uses by separating out the appropriate portion of the heap and re-introducing the rest in the postcondition of the code. In other words, if a piece of code `C` only uses memory cells `c1` and `c2`, then we can use heap separation to split a heap `h` into `h' * (c1 -> v1, c2 -> v2)`, have program `C` act on the local heap `(c1 -> v1, c2 -> v2)`, perhaps producing a new local heap `(c1 -> v1', c2 -> v2', c3 -> v3)` (here, `C` has created a new memory cell `c3` holding value `v3`), and obtain the final heap by reattaching `h'` to `(c1 -> v1', c2 -> v2', c3 -> v3)`. This works quite nicely, and I will refer you to the following papers for the history of those early days:
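The heap-splitting step can be sketched concretely by modeling heaps as finite maps and the separating conjunction as union of disjoint maps (an illustration with hypothetical names, not an implementation of the logic):

```python
# Illustration: heaps are dicts from cell names to values. Splitting a
# heap h into a frame h' and a local part mirrors h = h' * local.

def split(h, cells):
    """Split heap h into (frame, local), where local covers `cells`."""
    local = {c: h[c] for c in cells}
    frame = {c: v for c, v in h.items() if c not in cells}
    return frame, local

def join(frame, local):
    """Reattach two heaps; the * operator requires them to be disjoint."""
    assert not set(frame) & set(local), "heaps must be disjoint"
    return {**frame, **local}

h = {"c1": 1, "c2": 2, "c4": 40}

# C only uses c1 and c2, so separate them out...
frame, local = split(h, ["c1", "c2"])
# ...let C act on the local heap, updating c1, c2 and allocating c3...
local2 = {"c1": 10, "c2": 20, "c3": 3}
# ...and reattach the untouched frame to get the final heap.
h2 = join(frame, local2)
assert h2 == {"c1": 10, "c2": 20, "c3": 3, "c4": 40}
```

The key property the sketch makes visible is that the frame `h'` passes through untouched, which is exactly what the frame rule of separation logic licenses.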

Recently, separation logic has been applied to the problem of reasoning about heap-manipulating concurrent programs. New challenges emerge in this context, such as race conditions. It turns out that separation logic has something to say about managing resources (the heap can be viewed as a resource, so this is maybe not very surprising), and it has been used to reason about lock-free programs, that is, programs that attempt to avoid race conditions without overly using locks, if at all. Unfortunately, there appear to be subtle issues with the rules of the logic when concurrency is added to the mix.

A good overview of separation logic for concurrent programs is O'Hearn's survey paper:

- P. W. O'Hearn, Resources, Concurrency and Local Reasoning

O'Hearn credits Steve Brookes with the solution to the problems of separation logic for concurrent programs:

- S. D. Brookes, A Semantics for Concurrent Separation Logic

I hope the above reading material makes for a reasonable starting point.

This afternoon, lecture on termination in my course *Logic and Computation* (we're using ACL2 as a logic and theorem proving engine, and ACL2 requires a termination proof to admit a definition as a new axiom). And this allowed me to use the Collatz conjecture as an example of the non-triviality of establishing termination even for simple recursive functions.

The Collatz conjecture, also known as the 3n + 1 conjecture, the Ulam conjecture, the Syracuse problem, or the hailstone sequence, is named after the German mathematician Lothar Collatz, who proposed it in 1937. The conjecture is that for every positive natural number a[0], the sequence a[0], a[1], a[2], a[3], ... given by:

- a[n+1] = 1 if a[n] is 1
- a[n+1] = a[n] / 2 if a[n] is even
- a[n+1] = 3a[n] + 1 if a[n] is odd

always eventually hits 1.

Casting this as a termination problem, of course, is just asking whether the function f given by

- f(1) = 1
- f(n) = f(n / 2) if n is even
- f(n) = f(3n + 1) if n is odd

terminates on all positive natural number inputs.
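For what it's worth, here is a small sketch (the function name `collatz_steps` is mine) that counts how many steps an input takes to reach 1, with a step bound so the loop is guaranteed to stop even if the conjecture failed for some input:

```python
def collatz_steps(n, limit=10_000):
    """Number of Collatz steps for n to reach 1, or None if limit exceeded."""
    steps = 0
    while n != 1:
        if steps >= limit:
            return None   # gave up: possible non-termination
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

assert collatz_steps(1) == 0
assert collatz_steps(6) == 8      # 6, 3, 10, 5, 16, 8, 4, 2, 1
assert collatz_steps(27) == 111   # a famously long trajectory
```

Of course, the bound is precisely the cheat: ACL2 would demand a measure that provably decreases at every recursive call, and nobody knows one for this function.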

No one knows whether this is true. Experimental evidence suggests that it is, but that is almost meaningless. In particular, the conjecture is known to hold for all inputs up to more than 10^16. A good overview can be found in Weisstein's MathWorld:

Because it is so simple to state, and is extremely easy to understand, the problem has been the focus of attention for many amateur mathematicians and number-dabblers. In fact, I believe it to be almost a rite of passage to spend some time playing with the conjecture when first going through a math degree. (Thankfully, my own flirtation with the problem amounted to wasting no more than two weeks trying to formalize the "pattern" I saw in the sequences produced. Ah, the folly of youth...)

Naturally, professional mathematicians have also looked at the problem, and have come up with a surprising number of connections to seemingly unrelated topics. Best known is Jeffrey Lagarias, now at the U. of Michigan, and before that a researcher at AT&T Bell Labs.

Quite readable is his paper giving a survey of the problem from a mathematical standpoint:

- The 3x+1 problem and its generalizations, by J. Lagarias.

Also of interest are his annotated bibliographies on the problem:

- The 3x+1 problem: An annotated bibliography (1963--2000), by J. Lagarias
- The 3x+1 Problem: An Annotated Bibliography, II (2001-), by J. Lagarias

Both can be found on the arXiv.

Now, I seem to remember someone establishing a connection between this conjecture and the distribution of prime numbers, but I have not been able to retrace this. If anyone knows what I am thinking of, please drop me a line.

Well, between holiday lethargy (thanks Shawn Standefer), the beginning of the winter semester here at Northeastern, the CSF deadline a couple of weeks back and the upcoming UAI deadline, I have found precious little time to post since the beginning of the year. But I'm back, and ready to tackle the backlog.

I just received the March issue of the Notices of the AMS in the mail yesterday, and two articles caught my eye.

B. Poonen talks about "Undecidability in Number Theory", surveying the solution to Hilbert's Tenth Problem by Davis, Putnam, Robinson, and Matiyasevich, whose main theorem is that a set is recursively enumerable if and only if it is diophantine (*very roughly*, it can be obtained as the solution set of a diophantine equation). This amounts, interestingly enough, to programming with diophantine equations. Applications of the result beyond Hilbert's Tenth are also discussed.

Also in the issue, J. Ewing talks about "Where Are Journals Headed? Why We Should Worry About Author-Pay", yet another article discussing the journal subscription price crisis. The author makes the interesting point, which I had never thought of before, that mathematics (and CS) are hardly in any position to dictate policy, given that we have such a tiny proportion of the published journals. The biomedical sciences, with half the journals, drive the show. And the reason why this is important is that the publication model for the biomedical sciences is quite different from that for the core sciences.

I will let you explore those articles in your copious free time. The issue is freely available:

Several months ago I was asked to become a reviewer for the AMS's Mathematical Reviews. The load is light, and I come across papers that are interesting but that I might not otherwise read. I'll post the reviews here as they get done.

The first one is an article by Gillies on epistemic logic and belief revision. I could not find the paper online, but Gillies has slides for a talk with the same title:

- "What might be the case after a change in view", by A. S. Gillies

And here's the review. (I'm hoping that the fonts will work - LaTeX support on this wiki has not yet been enabled. EDIT: the Greek font doesn't work...)

A. S. Gillies, ``What Might Be the Case after a Change in View'', Journal of Philosophical Logic, vol. 35, pp. 117--145, 2006.

Belief revision, the study of how agents ought to revise their beliefs in the light of new information, is applicable to problems in fields ranging from philosophy to artificial intelligence. This article examines a well-known puzzle in belief revision that involves an inconsistency resulting from two fairly unassuming desiderata for a theory of belief revision.

The first desideratum is that a change in view should always be a minimal change in view. This is expressed by a *preservation* property: if you do not believe ¬P in some state of belief B, then revising that state with P should result in a new state of belief B' that is at least as strong as (carries as many commitments as) B.

The second desideratum, if we further allow reasoning about what might and must be the case, is that agents have some amount of rational introspection with respect to belief. This is expressed by a *reflectivity* property: a state of belief commits an agent to believe that it might be the case that P if and only if it does not commit the agent to ¬P; formally, the property requires every state of belief B to satisfy: if ¬P is not in B, then Might(P) is in B, and if P is in B, then Must(P) is in B.

Both of these properties are quite natural, and have been argued for in depth. Unfortunately, in the presence of two basic properties of belief revision with unimpeachable credentials (namely, that revising a state of belief by P yields a new state of belief that includes P, and that revising a state of belief by some non-contradictory P should yield a consistent state of belief, where a state of belief is consistent if it does not contain both Q and ¬Q for any Q), these desiderata are not satisfiable.

This non-satisfiability was proved by Fuhrmann in ``Reflective modalities and theory change'', *Synthese* 81, 115--134, 1989. More precisely, Fuhrmann proved that a theory of belief revision with the four properties above can only yield a trivial model of belief, in which every state of belief determines the truth value of every formula: for every state of belief B and every formula P, either P is in B or ¬P is in B. In other words, such a theory admits no state of belief with uncertainty as to whether some formula is true. This is of course unsatisfactory, as uncertain states of belief are the bread and butter of reasoning about belief. Many have attempted to recast the above belief revision puzzle to escape the Fuhrmann triviality result; all reasonable approaches revolve around rejecting either the preservation or the reflectivity property. Orthodoxy, as Gillies points out, is to retain reflectivity at the expense of preservation.

In this article, Gillies gives a new analysis of the situation, and agrees with the orthodox answer of keeping reflectivity at the expense of preservation, but for reasons different from the ones usually invoked. He carefully analyzes the problem using a refined model of belief based on possible worlds, and eventually identifies *persistence of epistemic commitments* as the main culprit. Roughly speaking, persistence of epistemic commitments means that when an agent believes that the current world is one in some set s in which every world commits her to believing P, and the agent later believes that the current world is one in some set s' contained in s, then every world in s' still commits her to believing P. Gillies proves that persistence of epistemic commitments (in conjunction with reasonable requirements on commitments in general) is equivalent to the preservation property. He then argues that in the presence of might and must statements, persistence of epistemic commitments is less attractive as a property, and introduces a belief revision model that does not exhibit persistence of epistemic commitments in general. That model of belief revision is nontrivial, satisfies the reflectivity property, does not exhibit the preservation property in its full generality, but retains preservation for non-modal formulas, which agrees with intuition.

First off, an oldie but goodie: Sistla on a syntactic characterization of liveness and safety properties in linear-time temporal logic. (It is still an open problem, I believe, to derive a corresponding result for branching-time temporal logic - correct me if I'm wrong.) This was originally published in PODC'95:

*Safety, Liveness and Fairness in Temporal Logic*, by A. P. Sistla

A recent short paper on the foundations of forcing:

*Foundations for abstract forcing*, by P. M. Johnson

The *Handbook of Applied Cryptography* by Menezes, van Oorschot and Vanstone is available online:

*Handbook of Applied Cryptography*, by A. J. Menezes, P. C. van Oorschot, and S. A. Vanstone

A recent paper on the problem of parallel prefix computation (on which a student of mine wrote a Master's Thesis a couple of months ago), trying to find the equivalent of Knuth's 0-1 Principle that applies to this problem:

*Much Ado about Two - A Pearl on Parallel Prefix Computation*, by J. Voigtländer

Came across this nice little collection of essays by Gabriel on patterns, software, writing, business, and his life story. Originally published in 1996, now available under a creative commons license:

*Patterns of Software: Tales from the Software Community*, by R. P. Gabriel

Have a good weekend.