For now, this avoids having to separately handle
AuthorityBuilderError, DirMgrConfigBuilderError, DownloadScheduleConfigBuilderError,
NetworkConfigBuilderError and FallbackDirBuilderError when anyhow is not
used.
Turn off a clippy warning.
The default soft limit on open file descriptors is typically enough
for process usage on most Unixes, but OSX has a pretty low default
(256), which is easy to run into under heavy usage.
With this patch, we're going to aim for as much as 16384, if we're
allowed.
Fixes part of #188.
I don't love this approach, but those errors aren't distinguished by
ErrorKind, so we have to use libc or winapi, apparently. At least
nothing here is unsafe.
Addresses part of #188.
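The clamping logic here can be sketched without the actual syscalls. This is a minimal model of the arithmetic only, with a hypothetical function name; the real patch makes the getrlimit/setrlimit calls through a safe wrapper rather than raw libc:

```rust
/// Target soft limit for open file descriptors.
const TARGET: u64 = 16_384;

/// Hypothetical helper: given the current soft and hard limits, pick the
/// new soft limit. Aim for TARGET, never exceed the hard limit, and never
/// lower a soft limit that is already higher than we'd ask for.
fn pick_new_soft_limit(soft: u64, hard: u64) -> u64 {
    TARGET.min(hard).max(soft)
}

fn main() {
    // A typical OSX default: soft limit 256, effectively unlimited hard limit.
    println!("{}", pick_new_soft_limit(256, u64::MAX)); // 16384
}
```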
The previous code would report all failures to build a circuit as
failures of the guard. But of course that's not right: If we
fail to extend to the second or third hop, that might or might not
be the guard's fault.
Now we use the "pending status" feature of the GuardMonitor type so
that an early failure is attributed to the guard, but a later
failure is attributed as "Indeterminate". Only a complete circuit
is called a success. We use a new "GuardStatusHandle" type here so
that we can report the status early if there is a timeout.
(When we're building a path with a guard, we need to tell the guard
manager whether the path succeeded, and we need to wait to hear
whether the guard is usable.)
The advantage here is that we no longer have to use a futures-aware
Mutex, or a blocking send operation, and therefore can simplify a
bunch of the GuardMgr APIs to no longer be async. That'll avoid
having to propagate the asyncness up the stack.
The disadvantage is that unbounded channels are just that: nothing
in the channel prevents us from overfilling it. Fortunately, the
process that consumes from the channel shouldn't block much, and
the channel only gets filled when we're planning a circuit path.
I tried using -Z minimal-versions to downgrade all first-level
dependencies to their oldest permitted versions, and found that we
were apparently depending on newer features of all three crates.
I'm kind of surprised there were only three.
Also, refactor our message handling to be more like the tor_proto
reactors. The previous code had a bug where, once the stream of
events was exhausted, we wouldn't actually get any more
notifications.
There are some missing parts here (like persistence and tests)
and some incorrect parts (I am 90% sure that the "exploratory
circuit" flag is bogus). Also it is not integrated with the circuit
manager code.
The limitations of toml seemed to be coming to a head, and I wasn't
able to refactor the guardmgr code enough to actually make its state
serializable as toml. Json's limitations are much narrower.
We'll need `id_pair_is_listed()` to track whether a sampled guard is
(or is not) listed in the consensus.
We'll need `missing_descriptor_for` to see whether we've downloaded
enough microdescs to use a consensus.
(Thank goodness for rust; we messed up the coherency in C here so
many times, but I'm pretty sure that this time around we can't have
gotten it wrong.)
Previously, we'd have to declare the field for a parameter in one
place, its default in a second, and its consensus key in a third.
That's error-prone and not so fun! This patch changes the
way we declare parameters: we declare a structure once,
and macros expand it so that everything does the right thing.
This required a few new traits and implementations to ensure
uniformity across the types that can go in parameters: We need every
parameter type to implement TryFrom<i32> and to implement
SaturatingFromInt32.
Eventually we might want SaturatingFromInt32 to be a more generic
SaturatingFrom, but that's not for now.
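A rough sketch of the declare-once idea with a declarative macro; the macro name, parameter names, and consensus keys below are all made up for illustration, not arti's actual ones:

```rust
/// Hypothetical macro: declare each parameter's field, type, default, and
/// consensus key in one place; expand to the struct plus its Default impl.
macro_rules! declare_net_params {
    { $( $field:ident : $ty:ty = $default:expr => $key:literal ),* $(,)? } => {
        #[derive(Debug)]
        pub struct NetParameters {
            $( pub $field: $ty, )*
        }
        impl Default for NetParameters {
            fn default() -> Self {
                Self { $( $field: $default, )* }
            }
        }
        impl NetParameters {
            /// All consensus keys, generated from the same declaration.
            pub fn keys() -> &'static [&'static str] {
                &[ $( $key ),* ]
            }
        }
    };
}

declare_net_params! {
    circuit_build_timeout: i32 = 60 => "cbtinitialtimeout",
    min_paths: i32 = 3 => "min_paths_for_circs_pct",
}

fn main() {
    let p = NetParameters::default();
    println!("{} {}", p.circuit_build_timeout, NetParameters::keys().len());
}
```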
Doing this makes the code faster, lets us throw away some code, and
makes it easier to add a "choose-N-disjoint relays" implementation.
See large comment about plusses and minuses of new code. (Note that
the old implementation wasn't constant-time either.)
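One way to sketch the "choose-N-disjoint" idea: weighted selection without replacement, removing each winner so later picks stay disjoint. The RNG is abstracted into a closure so the sketch stays deterministic; the real implementation differs.

```rust
/// Pick `n` distinct items by weight, removing each winner from the pool.
/// `pick` stands in for an RNG: given a total weight, it must return a
/// point in [0, total). Weights are assumed to be positive.
fn choose_n_disjoint<T>(
    mut items: Vec<(T, u64)>,
    n: usize,
    mut pick: impl FnMut(u64) -> u64,
) -> Vec<T> {
    let mut out = Vec::new();
    while out.len() < n && !items.is_empty() {
        let total: u64 = items.iter().map(|(_, w)| *w).sum();
        let mut x = pick(total);
        // Walk the cumulative weights to find which item x lands in.
        let idx = items
            .iter()
            .position(|(_, w)| {
                if x < *w {
                    true
                } else {
                    x -= *w;
                    false
                }
            })
            .expect("pick() returned a point within the total weight");
        out.push(items.remove(idx).0);
    }
    out
}

fn main() {
    let relays = vec![("guard", 1u64), ("middle", 2), ("exit", 3)];
    // A fake "RNG" that always returns 0 picks the first remaining item.
    println!("{:?}", choose_n_disjoint(relays, 2, |_| 0));
}
```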
On torspec!40, Mike says:
I don't think there is a practical difference here. As per
Section 2.4.5, if 60 seconds is not enough and causes the
liveness test to fail due to too many timeouts, we will double
the initial timeout.
This makes our behavior the same as C Tor.
The C Tor implementation doesn't do this, and Mike says:
I think it is a reasonable enough assumption that if Tor has
restarted, this kind of data is no longer fresh enough to be
accurate for this purpose. This is also only 20 circuits here,
and typical timeouts are now around 1-2 seconds or less. So a
restarted client with a timeout that is too low for a new
internet connection will figure this out pretty quickly. I think
that is OK.
(from torspec!40)
Now that we have const generics, we can use them. We can also avoid
an extra clone in the implementation for [u8; N].
Nothing in our codebase requires that we use Reader or Writer on a
GenericArray holding anything other than u8, so I've switched back
to the more efficient implementation there.
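A minimal sketch of the pattern (hypothetical trait, not tor_bytes's actual Reader API): with const generics, a fixed-size byte array can be read straight out of the input slice, with no intermediate clone.

```rust
/// Hypothetical trait standing in for a byte-oriented Reader.
trait Readable: Sized {
    fn read_from(buf: &mut &[u8]) -> Option<Self>;
}

/// One impl covers [u8; N] for every N, thanks to const generics.
impl<const N: usize> Readable for [u8; N] {
    fn read_from(buf: &mut &[u8]) -> Option<Self> {
        if buf.len() < N {
            return None;
        }
        // Copy directly into a stack array; no extra allocation or clone.
        let mut out = [0u8; N];
        out.copy_from_slice(&buf[..N]);
        *buf = &buf[N..];
        Some(out)
    }
}

fn main() {
    let mut input: &[u8] = &[1, 2, 3, 4, 5];
    let a: [u8; 3] = Readable::read_from(&mut input).unwrap();
    println!("{:?} {:?}", a, input); // [1, 2, 3] [4, 5]
}
```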
I've added a fuzzer case for the new method, but apparently rustc
nightly isn't working too well with fuzzers for me; I'm going to try
it tomorrow.