I thought it would be safe enough to return a few extra threads,
(since we happened to already get the relevant messages out of the
database). The problem is that this then requires the caller to carefully
read the number of threads returned and adjust its next "first" value
accordingly. The interface is much simpler to use if we simply return
exactly what is asked for and no more.
This serves me right for committing untested code. The
notmuch_query_search_threads function was totally broken, (it didn't
properly
treat -1 as being unlimited and instead returned no threads in that
case).
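As a minimal sketch of the intended behavior, (with invented helper and
parameter names, not the actual notmuch internals), the point is simply
that a negative limit must mean "all remaining threads" rather than
none:

    #include <stddef.h>

    /* Illustrative only: how many threads one page should contain
     * when a max_threads value of -1 means "unlimited". */
    static size_t
    threads_to_return (size_t available, size_t first, int max_threads)
    {
        size_t remaining = (first < available) ? available - first : 0;

        if (max_threads < 0)
            return remaining;   /* -1 (unlimited): everything left */

        return ((size_t) max_threads < remaining) ?
            (size_t) max_threads : remaining;
    }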
The library interface now allows the caller to do incremental searches,
(such as one page of results at a time). Next we'll just need to hook
this up to "notmuch search" and the emacs interface.
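For example, paging through results might look something like this, (a
hedged sketch: the (first, max_threads) arguments and the iterator
calls are assumed from the description above rather than quoted from
notmuch.h):

    #include <stdio.h>
    #include <notmuch.h>

    #define PAGE_SIZE 25

    static void
    show_page (notmuch_query_t *query, int first)
    {
        notmuch_threads_t *threads;

        for (threads = notmuch_query_search_threads (query, first,
                                                     PAGE_SIZE);
             notmuch_threads_has_more (threads);
             notmuch_threads_advance (threads))
        {
            notmuch_thread_t *thread = notmuch_threads_get (threads);

            printf ("%s\n", notmuch_thread_get_subject (thread));
            notmuch_thread_destroy (thread);
        }
    }

Since the query returns exactly PAGE_SIZE threads (or fewer at the
end), the caller gets the next page just by adding PAGE_SIZE to first.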
It's important to have the names present for determining whether a
thread is worth reading or not. We may want to think about
abbreviating the list somehow if it is excessively long (or redundant
as in bugzilla-daemon, bugzilla-daemon, bugzilla-daemon, etc.).
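One possible abbreviation, (purely illustrative, nothing in notmuch
does this yet), is to collapse consecutive duplicate names, which
already turns "bugzilla-daemon, bugzilla-daemon, bugzilla-daemon" into
a single entry:

    #include <stdio.h>
    #include <string.h>

    /* Print a comma-separated author list, skipping any name that
     * merely repeats the one before it. */
    static void
    print_abbreviated_authors (const char **names, size_t num_names)
    {
        size_t i;

        for (i = 0; i < num_names; i++) {
            if (i > 0 && strcmp (names[i], names[i - 1]) == 0)
                continue;
            printf ("%s%s", i ? ", " : "", names[i]);
        }
        printf ("\n");
    }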
We never did export any interface to get at these, and when I went to
use these, I found them inadequate, (because I wanted to distinguish
addresses found in From: from those found in To:). Meanwhile, it was
easy enough to extract addresses with a search like:
notmuch show tag:sent | grep ^To:
so the storage of contact terms was just wasting space. Stop that.
This will allow for things like the database path to be specified
without any cheesy NOTMUCH_BASE environment variable. It will also
allow "notmuch reply" to recognize the user's email address when
constructing a reply in order to do the right thing, (that is, to use
the user's address to which mail was sent as From:, and not to reply
to the user's own addresses).
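The reply logic this enables would look roughly like the following, (a
sketch with invented names, not actual notmuch code): pick as From:
whichever of the user's configured addresses the mail was actually
sent to, and fall back to the primary address otherwise.

    #include <stddef.h>
    #include <strings.h>

    static const char *
    choose_from_address (const char **recipients, size_t num_recipients,
                         const char **user_addresses, size_t num_user,
                         const char *primary_email)
    {
        size_t i, j;

        /* Use the user's address to which the mail was sent, if any. */
        for (i = 0; i < num_recipients; i++)
            for (j = 0; j < num_user; j++)
                if (strcasecmp (recipients[i], user_addresses[j]) == 0)
                    return user_addresses[j];

        return primary_email;
    }

The same list of the user's addresses is what lets the reply code
avoid adding the user's own addresses back into the new To: header.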
With this change, the "notmuch setup" command is now strictly for
changing the configuration of notmuch. It no longer creates the
database, but instead instructs the user to call "notmuch new" to do
that.
Previously, the top-level Makefile was explicitly adding -I./lib to
the compiler flags. However, that's something that's much better done
from within the Makefile.local fragment within the lib directory
itself.
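A fragment along these lines, (variable names are illustrative, not
necessarily what the tree uses), keeps the flag next to the code that
needs it:

    # Hypothetical lib/Makefile.local snippet: the lib directory asks
    # for its own include path rather than having the top-level
    # Makefile hard-code -I./lib.
    dir := lib
    extra_cflags += -I$(dir)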
I saw this recommendation in the implementation notes for "Recursive
Make Considered Harmful" and then the further recommendation for
implementing the idea in the GNU make manual.
The idea is that if any of the files listed in a dependency file
change, then we need to regenerate that dependency file before we
regenerate any targets.
The approach from the GNU make manual is simpler in that it just uses
a sed script to fix up the output of an extra invocation of the
compiler, (as opposed to the approach in the implementation notes from
the paper's author, which uses a wrapper script that is always invoked
in place of the compiler itself).
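The recipe from the GNU make manual looks roughly like this, (quoted
from memory, so treat it as a sketch; recipe lines must begin with a
tab):

    # Generate a .d file for each .c file; the sed script rewrites
    # "foo.o :" into "foo.o foo.d :" so the dependency file itself is
    # regenerated whenever any of the listed files change.
    %.d: %.c
            @set -e; rm -f $@; \
             $(CC) -M $(CPPFLAGS) $< > $@.$$$$; \
             sed 's,\($*\)\.o[ :]*,\1.o $@ : ,g' < $@.$$$$ > $@; \
             rm -f $@.$$$$

    include $(SRCS:.c=.d)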
The idea here is that every Makefile at each lower level will be an
identical, tiny file that simply defers to a top-level make.
Meanwhile, the Makefile.local file at each level is a Makefile snippet
to be included at the top-level into a large, flat Makefile. As such,
it needs to define its rules with the full relative path to each
file, (with the directory typically held in $(dir)). The local files
can also append to
variables such as SRCS and CLEAN for files to be analyzed for
dependencies and to be cleaned.
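Concretely, (with illustrative file names), the two pieces look like
this:

    # Any lower-level Makefile is the same tiny stub deferring to the
    # top level (its recipe line begins with a tab):
    all:
            $(MAKE) -C ..

    # lib/Makefile.local, included from the top-level Makefile, names
    # each file with its full relative path via $(dir) and appends to
    # the shared variables:
    dir := lib
    SRCS += $(dir)/database.c $(dir)/query.c
    CLEAN += $(dir)/*.o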