This commit changes how the `TorClient` type works, enabling it to be
constructed synchronously without initiating the bootstrapping process.
Daemon tasks are still started on construction (although some of them
won't do anything if the client isn't bootstrapped).
The old bootstrap() methods are now reimplemented in terms of the new
create_unbootstrapped() and bootstrap_existing() methods.
This required refactoring how the `DirMgr` works to enable the same sort
of thing there.
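As a rough illustration, a caller might use the new entry points like
this; the method names come from this commit, but the constructor
arguments, the error handling, and the final connect() call are
assumptions rather than the settled API:

```rust
// Sketch only: create_unbootstrapped() and bootstrap_existing() are named in
// this commit, but the argument shapes and error types here are assumptions.
use arti_client::{TorClient, TorClientConfig};

async fn lazy_client() -> anyhow::Result<()> {
    // Synchronous construction: no network activity yet, though daemon
    // tasks are already started.
    let client = TorClient::create_unbootstrapped(TorClientConfig::default())?;

    // Bootstrapping is now an explicit, separate step.
    client.bootstrap_existing().await?;

    // Once bootstrapped, the client can be used as before.
    let _stream = client.connect(("example.com", 80)).await?;
    Ok(())
}
```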
Closes #293
This is by no means our final API, but should represent an
improvement. Here, instead of having to specify a list of files and
their is-this-optional status, along with a list of command-line
options, we have a single structure that encapsulates all of that
information.
Two advantages here:
- Callers no longer have to remember what the boolean means.
- We can "reload" more easily, by keeping the source object around.
This change also implements the correct behavior for our default
configuration file in `arti::main`: if the file is absent and the
user doesn't list a config file, that's no problem. But if the user
lists _that very same config file_, we should insist that it be
present.
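To make the shape of this concrete, here is a self-contained sketch of
what such a structure could look like; all of the names below
(ConfigSources, MustRead, and so on) are hypothetical stand-ins, not
the actual type added here:

```rust
use std::path::PathBuf;

/// Hypothetical sketch of a "configuration sources" structure: files tagged
/// with whether they must exist, plus command-line overrides, in one place.
#[derive(Debug, Default, Clone)]
struct ConfigSources {
    files: Vec<(PathBuf, MustRead)>,
    options: Vec<String>,
}

/// Replaces the old "is this file optional?" boolean with a self-describing enum.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum MustRead {
    /// A missing file is an error (e.g. a file the user named explicitly).
    MustRead,
    /// A missing file is fine (e.g. the default config location).
    TolerateAbsence,
}

impl ConfigSources {
    fn push_file(&mut self, path: PathBuf, must_read: MustRead) {
        self.files.push((path, must_read));
    }
    fn push_option(&mut self, opt: impl Into<String>) {
        self.options.push(opt.into());
    }
}
```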
This implements the proposal from arti#298: the `BenchmarkResults`
type is now made out of a set of new `Statistic` types (each
summarizing the mean, median, range, and standard deviation of an
arbitrary value) instead of overloading `TimingSummary` for this
purpose.
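For illustration, such a summary can be computed roughly as follows;
the field and constructor names below are assumptions, not necessarily
what `Statistic` itself uses:

```rust
/// Sketch of a per-metric summary (field names are assumptions).
#[derive(Debug, Clone, Copy)]
struct Statistic {
    mean: f64,
    median: f64,
    min: f64,
    max: f64,
    stddev: f64,
}

impl Statistic {
    /// Summarize a set of samples of some arbitrary value.
    fn from_samples(mut samples: Vec<f64>) -> Option<Self> {
        if samples.is_empty() {
            return None;
        }
        samples.sort_by(|a, b| a.partial_cmp(b).expect("NaN sample"));
        let n = samples.len() as f64;
        let mean = samples.iter().sum::<f64>() / n;
        // For an even number of samples this takes the upper middle value;
        // good enough for a sketch.
        let median = samples[samples.len() / 2];
        let variance = samples.iter().map(|x| (x - mean).powi(2)).sum::<f64>() / n;
        Some(Statistic {
            mean,
            median,
            min: samples[0],
            max: samples[samples.len() - 1],
            stddev: variance.sqrt(),
        })
    }
}
```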
This commit puts the native-tls crate behind a feature. The feature
is off by default in the tor-rtcompat crate, but can be enabled
from either arti or arti-client.
There is an included script that I used to test that tor-rtcompat
could build and run its tests with all subsets of its features.
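Roughly, the gating is the usual Cargo feature pattern (this is
illustrative, not the literal tor-rtcompat code): the native-tls-backed
items only exist when the feature is enabled, and arti or arti-client
turn it on through their tor-rtcompat dependency:

```rust
// Illustrative only; the real module and type names in tor-rtcompat differ.
#[cfg(feature = "native-tls")]
mod native_tls_impl {
    /// TLS provider built on the `native-tls` crate (hypothetical name).
    pub struct NativeTlsProvider;
}

#[cfg(feature = "native-tls")]
pub use native_tls_impl::NativeTlsProvider;
```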
Closes #300
We now conduct benchmark tests with multiple concurrent streams (by
default; this is configurable by passing `-p` to `arti-bench`).
Currently, these results just get "flattened" for the purposes of
statistical analysis (as in, `results_raw` contains each connection's
timing summary, across all benchmark runs). This might be
something we wish to change in future.
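As a sketch of what that flattening amounts to (the type and field
names below are simplified stand-ins for arti-bench's own):

```rust
/// Simplified stand-in for arti-bench's per-connection timing summary.
#[derive(Debug, Clone, Copy)]
struct TimingSummary {
    upload_ttfb_sec: f64,
    download_rate_megabit: f64,
}

/// Collapse per-run, per-connection summaries into one flat list, which is
/// what the statistical analysis then operates on.
fn flatten(runs: &[Vec<TimingSummary>]) -> Vec<TimingSummary> {
    runs.iter().flatten().copied().collect()
}
```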
The stats summary now also records "best" and "worst" values for each
metric, to give a rough idea of the range of values encountered.
Additionally, we now support writing the benchmark results out to a JSON
file. A future commit may integrate this with CI, so that we have
benchmark results for every commit as a build artefact.
(some documentation was also fixed)
part of arti#292
We now take multiple samples (configurable; default 3) per type of
`arti-bench` benchmark run, and compute the mean and median of all the
data collected, in the hope of being a bit more resilient to random
outliers and variation.
This uses some `futures::stream::Stream` hacks, which might result in
more connections being made than required (and might impact the TTFB
metrics somewhat, at least for downloading).
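For a sense of the kind of hack meant here, the pattern below (a guess
at the approach, not the actual arti-bench code) uses
`buffer_unordered()`, which eagerly drives up to `parallel` connection
futures at once and so can start work slightly ahead of need:

```rust
use futures::stream::{self, StreamExt};

/// Run `count` connection tasks with up to `parallel` in flight at once.
/// Results come back in completion order.
async fn run_connections<F, Fut, T>(count: usize, parallel: usize, connect: F) -> Vec<T>
where
    F: Fn() -> Fut,
    Fut: std::future::Future<Output = T>,
{
    stream::iter((0..count).map(|_| connect()))
        .buffer_unordered(parallel)
        .collect::<Vec<T>>()
        .await
}
```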
Results now get collected into a `BenchmarkResults` struct per type of
benchmark; these will in turn be placed into a `BenchmarkSummary` in a
later commit, which will also add the ability to serialize the latter
struct to disk, for future reference.
part of arti#292
I found these versions empirically, by using the following process:
First, I used `cargo tree --depth 1 --kind all` to get a list of
every immediate dependency we had.
Then, I used `cargo upgrade --workspace package@version` to change
each dependency to the earliest version with which (in theory) the
current version is semver-compatible. In other words, if the current version
was 3.2.3, I picked "3". If the current version was 0.12.8, I
picked "0.12".
Then, I used `cargo +nightly update -Z minimal-versions` to
downgrade Cargo.lock to the minimal listed version for each
dependency. (I had to override a few packages; see .gitlab-ci.yml
for details).
Finally, I repeatedly increased the version of each of our
dependencies until our code compiled and the tests passed. Here's
what I found that we need:
anyhow >= 1.0.5: Earlier versions break our hyper example.
async-broadcast >= 0.3.2: Earlier versions fail our tests.
async-compression >= 0.3.5: Earlier versions handled futures and tokio
differently.
async-trait >= 0.1.2: Earlier versions are too buggy to compile our
code.
clap >= 2.33.0: For Arg::default_value_os().
coarsetime >= 0.1.20: Exposes the as_ticks() function.
curve25519-dalek >= 3.2: For is_identity().
generic-array >= 0.14.3: Earlier versions don't implement
From<&[T; 32]>.
httparse >= 1.2: Earlier versions didn't implement Error.
itertools >= 0.10.1: For at_most_one().
rusqlite >= 0.26.3: For backward compatibility with older rustc.
serde >= 1.0.103: Older versions break our code.
serde_json >= 1.0.50: We need its Value type to implement Eq.
shellexpand >= 2.1: To avoid a broken dirs crate version.
tokio >= 1.4: For Handle::block_on().
tracing >= 0.1.18: Previously, tracing_core and tracing had separate
LevelFilter types.
typenum >= 1.12: For compatibility with the rust-crypto crates.
x25519-dalek >= 1.2.0: For was_contributory().
Closes #275.
The new `arti-bench` crate does a simple end-to-end benchmark test
embedding Arti: it generates some random data (the amount is
configurable via command-line parameters), and then sends that data
back and forth via Arti (which should be configured to use a local
Chutney network).
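The measurement itself is roughly of this shape (a sketch, not
arti-bench's actual code; in the real benchmark the connection runs
through Arti, or through a SOCKS5 proxy, to a peer on the Chutney
network):

```rust
use rand::RngCore;
use std::io::{Read, Write};
use std::time::{Duration, Instant};

/// Send a random payload over `stream`, read the echo back, and time it.
fn time_round_trip<S: Read + Write>(
    stream: &mut S,
    payload_len: usize,
) -> std::io::Result<Duration> {
    let mut payload = vec![0u8; payload_len];
    rand::thread_rng().fill_bytes(&mut payload);

    let start = Instant::now();
    stream.write_all(&payload)?;
    stream.flush()?;
    let mut echoed = vec![0u8; payload_len];
    stream.read_exact(&mut echoed)?;
    Ok(start.elapsed())
}
```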
Additionally, the benchmark can be run via a local SOCKS5 server (in
order to benchmark the performance via a local Chutney node, for
comparison).
The `tests/chutney/arti-bench.sh` script sets up and tears down Chutney
as required to make this work.
This is very much a first cut; there are many things that should
eventually get added, such as support for multiple connections, JSON
output capabilities, running multiple tests, ...