Irmin_traces.Trace_stat_summary

Conversion of a Stat_trace to a summary that is both pretty-printable and exportable to JSON.

The main type t here isn't versioned, unlike Stat_trace.t.
Computing a summary may take a long time if the input Stat_trace is long. Count ~1000 commits per second.
This file is NOT meant to be used from Tezos, as opposed to some other "trace_*" files.
module Def = Trace_definitions.Stat_trace
module Conf = Trace_stat_summary_conf
module Utils = Trace_stat_summary_utils
module Vs = Utils.Variable_summary
module Seq = Trace_common.Seq

type curve = Utils.curve

val curve_t : Utils.curve Repr.t

module Span : sig ... end

A stat trace can be chunked into blocks. A block is made of two phases: first the buildup, then the commit.
module Watched_node : sig ... end

type bag_stat = {
  value_before_commit : Vs.t;
  value_after_commit : Vs.t;
  diff_per_block : Vs.t;
  diff_per_buildup : Vs.t;
  diff_per_commit : Vs.t;
}

Summary of an entry contained in Def.bag_of_stat.
Properties of such a variable:

~is_linearly_increasing:false.

The value_after_commit is initially fed with the value in the header (i.e. the value recorded just before the start of the play).
val bag_stat_t : bag_stat Repr.t

val finds_t : finds Repr.t

type pack = {
  finds : finds;
  appended_hashes : bag_stat;
  appended_offsets : bag_stat;
  inode_add : bag_stat;
  inode_remove : bag_stat;
  inode_of_seq : bag_stat;
  inode_of_raw : bag_stat;
  inode_rec_add : bag_stat;
  inode_rec_remove : bag_stat;
  inode_to_binv : bag_stat;
  inode_decode_bin : bag_stat;
  inode_encode_bin : bag_stat;
}

val pack_t : pack Repr.t
val tree_t : tree Repr.t
val index_t : index Repr.t
val gc_t : gc Repr.t
val disk_t : disk Repr.t
val store_t : store Repr.t

type t = {
  summary_timeofday : float;
  summary_hostname : string;
  curves_sample_count : int;
  moving_average_half_life_ratio : float;
  config : Def.config;
  hostname : string;
  word_size : int;
  timeofday : float;
  timestamp_wall0 : float;
  timestamp_cpu0 : float;
  elapsed_wall : float;
  elapsed_wall_over_blocks : Utils.curve;
  elapsed_cpu : float;
  elapsed_cpu_over_blocks : Utils.curve;
  op_count : int;
  span : Span.map;
  block_count : int;
  cpu_usage : Vs.t;
  index : index;
  pack : pack;
  tree : tree;
  gc : gc;
  disk : disk;
  store : store;
}

val t : t Repr.t

val create_vs :
int ->
evolution_smoothing:[ `Ema of float * float | `None ] ->
scale:[ `Linear | `Log ] ->
  Vs.acc

val create_vs_exact : int -> Vs.acc
val create_vs_smooth : int -> Vs.acc
val create_vs_smooth_log : int -> Vs.acc

module Span_folder : sig ... end

Accumulator for the span field of t.
module Bag_stat_folder : sig ... end

Summary computation for statistics recorded in Def.bag_of_stat.
module Store_watched_nodes_folder : sig ... end

Accumulator for the store field of t.
val major_heap_top_bytes_folder :
'a Def.header_base ->
int ->
([> `Commit of 'b Def.commit_base ], Utils.Resample.acc, float list)
  Utils.Parallel_folders.folder

Build a resampled curve of gc.top_heap_words.
val elapsed_wall_over_blocks_folder :
'a Def.header_base ->
int ->
([> `Commit of 'b Def.commit_base ], Utils.Resample.acc, float list)
  Utils.Parallel_folders.folder

Build a resampled curve of timestamps.
val elapsed_cpu_over_blocks_folder :
'a Def.header_base ->
int ->
([> `Commit of 'b Def.commit_base ], Utils.Resample.acc, float list)
  Utils.Parallel_folders.folder

Build a resampled curve of timestamps.
val merge_durations_folder :
  (Def.pack Def.row_base, float list, float list) Utils.Parallel_folders.folder

Build a list of all the merge durations.
val cpu_usage_folder :
'a Def.header_base ->
int ->
([> `Commit of 'b Def.commit_base ], float * float * Vs.acc, Vs.t)
  Utils.Parallel_folders.folder

val misc_stats_folder :
'a Def.header_base ->
([> `Commit of 'b Def.commit_base ],
float * float * int,
float * float * int)
  Utils.Parallel_folders.folder

Subtract the first timestamp from the last and count the number of spans.
val summarise' : Def.pack Def.header_base -> int -> Def.row Seq.t -> t

Fold over row_seq and produce the summary.
Parallel Folders
Almost all the entries in t require independently folding over the rows of the stat trace, but we want to read the trace only once.
All the boilerplate is hidden behind Utils.Parallel_folders, a data structure that holds all the folder functions, takes care of feeding the rows to those folders, and preserves the types.
In the code below, pf0 is the initial parallel folder, before the first accumulation. Each |+ ... statement declares an acc, accumulate, finalise triplet, i.e. a folder.
val acc : acc is the initial empty accumulation of a folder.
val accumulate : acc -> row -> acc needs to be folded over all the rows of the stat trace. Calling Parallel_folders.accumulate pf row feeds row to every folder.
val finalise : acc -> v has to be applied to the final acc of a folder in order to produce that folder's final value, the value meant to be stored in Trace_stat_summary.t. Calling Parallel_folders.finalise pf finalises all the folders and passes their results to construct.
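The acc / accumulate / finalise triplet can be sketched with plain OCaml closures. This is an illustrative miniature, not the real Utils.Parallel_folders: the folder record and the row_count, sum, and run names below are hypothetical, and the real module additionally preserves heterogeneous result types through its |+ combinator.

```ocaml
(* A folder bundles an initial accumulator, a step function fed every row,
   and a finaliser producing the folder's result. *)
type ('row, 'acc, 'v) folder = {
  acc : 'acc;                         (* initial empty accumulation *)
  accumulate : 'acc -> 'row -> 'acc;  (* folded over all rows *)
  finalise : 'acc -> 'v;              (* produces the final value *)
}

(* A folder that counts rows. *)
let row_count = { acc = 0; accumulate = (fun n _ -> n + 1); finalise = (fun n -> n) }

(* A folder that sums float rows. *)
let sum = { acc = 0.; accumulate = ( +. ); finalise = (fun s -> s) }

(* A single pass over the rows feeds both folders, mimicking how
   Parallel_folders.accumulate dispatches each row to every folder. *)
let run f1 f2 rows =
  let a1, a2 =
    List.fold_left
      (fun (a1, a2) r -> (f1.accumulate a1 r, f2.accumulate a2 r))
      (f1.acc, f2.acc) rows
  in
  (f1.finalise a1, f2.finalise a2)

let () =
  let count, total = run row_count sum [ 1.0; 2.5; 0.5 ] in
  Printf.printf "%d rows, total %.1f\n" count total
```

The point of the pattern is that adding a new statistic to the summary only requires declaring one more triplet; the single traversal of the (potentially very long) trace is shared.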
val summarise : ?block_count:int -> string -> t

Turn a stat trace into a summary.
The number of blocks to consider may be provided in order to truncate the summary.
val save_to_json : t -> string -> unit
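A hypothetical end-to-end usage, combining the two functions above (it requires the irmin-traces code and an existing stat trace file; both file paths are illustrative):

```ocaml
open Irmin_traces

let () =
  (* Summarise only the first 1_000 blocks of the trace. *)
  let summary = Trace_stat_summary.summarise ~block_count:1_000 "stat_trace.repr" in
  (* Export the summary to JSON for later inspection. *)
  Trace_stat_summary.save_to_json summary "summary.json"
```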