A Benchmarking and Performance Analysis Framework

The code base includes three sub-systems. The first is the collection agent,
[Pbench Agent](docs/Agent/agent.md), responsible for collecting configuration
data for test systems, managing the collection of performance tool data from
those systems (`sar`, `vmstat`, `perf`, etc.), and executing and
post-processing standardized or arbitrary benchmark workloads (`uperf`, `fio`,
`linpack`, as well as real system activity).

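The Agent workflow described above can be illustrated with a few commands.
This is a minimal sketch, assuming `pbench-agent` is installed and configured;
`./my-workload.sh` is a placeholder for your own test script.

```shell
# Minimal Pbench Agent workflow sketch; assumes pbench-agent is installed
# and configured, and ./my-workload.sh is a placeholder for your own test.

# Register the default set of performance tools (sar, vmstat, etc.).
pbench-register-tool-set

# Run an arbitrary workload while Pbench collects tool data around it.
pbench-user-benchmark --config=my-test -- ./my-workload.sh

# Ship the collected results to a Pbench Server.
pbench-results-move
```
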
The second sub-system is the [Pbench Server](docs/Server/server.md), which is
responsible for archiving result tarballs and providing a secure
[RESTful API](docs/Server/API/README.md) to client applications, such as the
Pbench Dashboard. The API supports curation of results data, annotation of
results with arbitrary metadata, and exploration of the results and collected
data.

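As a hedged sketch of using that REST API: the `/api/v1/datasets` endpoint and
bearer-token authorization follow the Pbench Server API documentation linked
above, but the server host below is a placeholder.

```python
# Sketch of building a Pbench Server REST API request; the host name is
# a placeholder, not a real deployment.
import urllib.request

SERVER = "https://pbench.example.com"  # placeholder host

def list_datasets_request(api_key=None):
    """Build a GET request listing publicly accessible datasets."""
    url = SERVER + "/api/v1/datasets?access=public"
    headers = {"Accept": "application/json"}
    if api_key is not None:
        # API keys generated on the Dashboard's User Profile page are
        # presented as a bearer token.
        headers["Authorization"] = "Bearer " + api_key
    return urllib.request.Request(url, headers=headers)

req = list_datasets_request("example-key")
# req is ready to pass to urllib.request.urlopen() against a real server.
```
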
The third sub-system is the [Pbench Dashboard](docs/Dashboard/user_guide.md),
which provides a web-based GUI for the Pbench Server, allowing users to list
and view public results. After logging in, users can view their own results,
publish results for others to view, and delete results which are no longer of
use. On the _User Profile_ page, a logged-in user can generate API keys for
use with the Pbench Server API or with the Agent `pbench-results-move`
command. The Pbench Dashboard also serves as a platform for exploring and
visualizing result data.

## How is it installed?
Instructions for installing `pbench-agent` can be found
in the Pbench Agent [Getting Started Guide](
https://distributed-system-analysis.github.io/pbench/gh-pages/start.html).

For Fedora, CentOS, and RHEL users, we have made available [COPR RPM
builds](https://copr.fedorainfracloud.org/coprs/ndokos/pbench/) for the
`pbench-agent` and some benchmark and tool packages.
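
As a sketch of that installation path (repository name taken from the COPR
link above; assumes the `dnf` COPR plugin from `dnf-plugins-core` is
available):

```shell
# Enable the COPR repository and install the agent on Fedora/CentOS/RHEL.
sudo dnf copr enable ndokos/pbench
sudo dnf install pbench-agent
```
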

You might want to consider browsing through the [rest of the documentation](
https://distributed-system-analysis.github.io/pbench/gh-pages/doc.html).

You can also use `podman` or `docker` to pull Pbench Agent containers from
[Quay.io](https://quay.io/pbench/).

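For example (the image name and tag here are illustrative assumptions; browse
the Quay.io organization linked above for the actual list of published
images):

```shell
# Pull a Pbench Agent container image; the exact image name below is an
# assumption, so check https://quay.io/pbench/ for published images.
podman pull quay.io/pbench/pbench-agent-all-fedora-38:latest
```
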
## How do I use pbench?
Refer to the [Pbench Agent Getting Started Guide](
https://distributed-system-analysis.github.io/pbench/gh-pages/start.html).