Class: Minitest::BenchSpec
Relationships & Source Files
Inherits: Minitest::Benchmark
Defined in: lib/minitest/benchmark.rb
Overview
The spec version of Benchmark.
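As a quick sketch of usage (MyList, lookup, and @list are placeholder names, not part of Minitest), a benchmark spec is written with the same describe block as an ordinary spec, using bench_performance_* calls in place of it; a description ending in "Bench" or "Benchmark" is routed to this class via register_spec_type:

describe "MyList Bench" do
  before do
    @list = MyList.new   # placeholder setup for the object under test
  end

  bench_performance_linear "lookup", 0.9999 do |n|
    n.times { |i| @list.lookup i }
  end
end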
Class Method Summary
- .bench(name, &block): This is used to define a new benchmark method.
- .bench_performance_constant(name, threshold = 0.99, &work): Create a benchmark that verifies that the performance is constant.
- .bench_performance_exponential(name, threshold = 0.99, &work): Create a benchmark that verifies that the performance is exponential.
- .bench_performance_linear(name, threshold = 0.99, &work): Create a benchmark that verifies that the performance is linear.
- .bench_performance_logarithmic(name, threshold = 0.99, &work): Create a benchmark that verifies that the performance is logarithmic.
- .bench_performance_power(name, threshold = 0.99, &work): Create a benchmark that verifies that the performance fits a power curve.
- .bench_range(&block): Specifies the ranges used for benchmarking for that class.
Spec::DSL - Extended
after | Define an 'after' action. |
before | Define a 'before' action. |
it | Define an expectation with the given name; the name is morphed into a proper test method name. |
let | Essentially, define an accessor for the given name backed by the block. |
register_spec_type | Register a new type of spec that matches the spec's description. |
spec_type | Figure out the spec class to use based on a spec's description. |
specify | Alias for Spec::DSL#it. |
subject | Another lazy man's accessor generator. |
Benchmark - Inherited
.bench_exp | Returns a set of ranges stepped exponentially from min to max by powers of base. |
.bench_linear | Returns a set of ranges stepped linearly from min to max by step. |
.bench_range | Specifies the ranges used for benchmarking for that class. |
Test - Inherited
.i_suck_and_my_tests_are_order_dependent! | Call this at the top of your tests when you absolutely positively need to have ordered tests. |
.make_my_diffs_pretty! | Make diffs for this Test use #pretty_inspect so that assert_equal diffs show more detail. |
.parallelize_me! | Call this at the top of your tests when you want to run your tests in parallel. |
.runnable_methods | Returns all instance methods starting with “test_”. |
.test_order | Defines the order to run tests (:random by default). |
Guard - Extended
jruby? | Is this running on jruby? |
maglev? | Is this running on maglev? |
mri? | Is this running on mri? |
osx? | Is this running on macOS? |
rubinius? | Is this running on rubinius? |
windows? | Is this running on windows? |
Runnable - Inherited
.methods_matching | Returns all instance methods matching the given pattern. |
.run | Responsible for running all runnable methods in a given class, each in its own instance. |
.run_one_method | Runs a single method and has the reporter record the result. |
.runnable_methods | Each subclass of Runnable is responsible for overriding this method to return all runnable methods. |
.runnables | Returns all subclasses of Runnable. |
Instance Attribute Summary
Reportable - Included
Assertions - Included
#skipped? | Was this testcase skipped? Meant for #teardown. |
Runnable - Inherited
Instance Method Summary
Benchmark - Inherited
#assert_performance | Runs the given work, gathering the times of each run. |
#assert_performance_constant | Runs the given work and asserts that the times gathered fit a constant rate (i.e., slope near zero) within the given threshold. |
#assert_performance_exponential | Runs the given work and asserts that the times gathered fit an exponential curve within the given threshold. |
#assert_performance_linear | Runs the given work and asserts that the times gathered fit a straight line within the given threshold. |
#assert_performance_logarithmic | Runs the given work and asserts that the times gathered fit a logarithmic curve within the given threshold. |
#assert_performance_power | Runs the given work and asserts that the times gathered fit a power curve within the given threshold. |
#fit_error | Takes an array of x/y pairs and calculates the general R^2 value. |
#fit_exponential | To fit a functional form: y = ae^(bx). |
#fit_linear | Fits the functional form: a + bx. |
#fit_logarithmic | To fit a functional form: y = a + b*ln(x). |
#fit_power | To fit a functional form: y = ax^b. |
#sigma | Enumerates over the given enumerable, mapping with the block if given, and returns the sum of the results. |
#validation_for_fit | Returns a proc that calls the specified fit method and asserts that the error is within a tolerable threshold. |
Test - Inherited
#run | Runs a single test with setup/teardown hooks. |
Guard - Included
#jruby? | Is this running on jruby? |
#maglev? | Is this running on maglev? |
#mri? | Is this running on mri? |
#osx? | Is this running on macOS? |
#rubinius? | Is this running on rubinius? |
#windows? | Is this running on windows? |
Test::LifecycleHooks - Included
#after_setup | Runs before every test, after setup. |
#after_teardown | Runs after every test, after teardown. |
#before_setup | Runs before every test, before setup. |
#before_teardown | Runs after every test, before teardown. |
#setup | Runs before every test. |
#teardown | Runs after every test. |
Reportable - Included
#location | The location identifier of this test. |
#result_code | Returns “.”, “F”, or “E” based on the result of the run. |
Assertions - Included
#assert | Fails unless the test is truthy. |
#assert_empty | Fails unless the object is empty. |
#assert_equal | Fails unless the expected and actual values are equal (exp == act). |
#assert_in_delta | For comparing Floats. |
#assert_in_epsilon | For comparing Floats. |
#assert_includes | Fails unless the collection includes the object. |
#assert_instance_of | Fails unless the object is an instance of the given class. |
#assert_kind_of | Fails unless the object is a kind of the given class. |
#assert_match | Fails unless the matcher matches the object. |
#assert_mock | Assert that the mock verifies correctly. |
#assert_nil | Fails unless the object is nil. |
#assert_operator | For testing with binary operators. |
#assert_output | Fails if stdout or stderr do not output the expected results. |
#assert_path_exists | Fails unless the given path exists. |
#assert_predicate | For testing with predicates. |
#assert_raises | Fails unless the block raises one of the expected exceptions. |
#assert_respond_to | Fails unless the object responds to the given method. |
#assert_same | Fails unless the expected and actual values are the same object (equal?). |
#assert_send | Fails unless the method send returns a true value. |
#assert_silent | Fails if the block outputs anything to stderr or stdout. |
#assert_throws | Fails unless the block throws the given symbol. |
#capture_io | Captures $stdout and $stderr into strings. |
#capture_subprocess_io | Captures $stdout and $stderr into strings, using Tempfile to ensure that subprocess IO is captured as well. |
#diff | Returns a diff between the expected and actual values. |
#exception_details | Returns details for the given exception. |
#fail_after | Fails after a given date (in the local time zone). |
#flunk | Fails with the given message. |
#message | Returns a proc that will output the given message along with the default message. |
#mu_pp | Returns a human-readable version of the given object. |
#mu_pp_for_diff | Returns a diff-able, more human-readable version of the given object. |
#pass | Used for counting assertions. |
#refute | Fails if the test is truthy. |
#refute_empty | Fails if the object is empty. |
#refute_equal | Fails if the expected and actual values are equal (exp == act). |
#refute_in_delta | For comparing Floats. |
#refute_in_epsilon | For comparing Floats. |
#refute_includes | Fails if the collection includes the object. |
#refute_instance_of | Fails if the object is an instance of the given class. |
#refute_kind_of | Fails if the object is a kind of the given class. |
#refute_match | Fails if the matcher matches the object. |
#refute_nil | Fails if the object is nil. |
#refute_operator | Fails if the binary operation o1 op o2 is truthy. |
#refute_path_exists | Fails if the given path exists. |
#refute_predicate | For testing with predicates. |
#refute_respond_to | Fails if the object responds to the given method. |
#refute_same | Fails if the expected and actual values are the same object (equal?). |
#skip | Skips the current run. |
#skip_until | Skips the current run until a given date (in the local time zone). |
#things_to_diff | Returns things to diff [expect, butwas], or [nil, nil] if nothing to diff. |
Runnable - Inherited
#result_code | Returns a single character string to print based on the result of the run. |
#run | Runs a single method. |
Class Method Details
.bench(name, &block)
This is used to define a new benchmark method. You usually won't use this directly; it is intended for those who need to write new performance curve fits (e.g., a specific polynomial fit).
See .bench_performance_linear for an example of how to use this.
# File 'lib/minitest/benchmark.rb', line 357
def self.bench name, &block
  define_method "bench_#{name.gsub(/\W+/, "_")}", &block
end
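For illustration only, a custom benchmark defined with .bench would typically call #assert_performance with a validation proc built by #validation_for_fit (both inherited from Benchmark); @obj and algorithm below are placeholder names:

describe "my class Bench" do
  bench "custom_fit" do
    # Fit the gathered timings to a linear model and require R^2 >= 0.95.
    validation = validation_for_fit :linear, 0.95

    assert_performance validation do |n|
      @obj.algorithm(n)
    end
  end
end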
.bench_performance_constant(name, threshold = 0.99, &work)
Create a benchmark that verifies that the performance is constant.
describe "my class Bench" do
bench_performance_constant "zoom_algorithm!" do |n|
@obj.zoom_algorithm!(n)
end
end
# File 'lib/minitest/benchmark.rb', line 401
def self.bench_performance_constant name, threshold = 0.99, &work
  bench name do
    assert_performance_constant threshold, &work
  end
end
.bench_performance_exponential(name, threshold = 0.99, &work)
Create a benchmark that verifies that the performance is exponential.
describe "my class Bench" do
bench_performance_exponential "algorithm" do |n|
@obj.algorithm(n)
end
end
# File 'lib/minitest/benchmark.rb', line 416
def self.bench_performance_exponential name, threshold = 0.99, &work
  bench name do
    assert_performance_exponential threshold, &work
  end
end
.bench_performance_linear(name, threshold = 0.99, &work)
Create a benchmark that verifies that the performance is linear.
describe "my class Bench" do
bench_performance_linear "fast_algorithm", 0.9999 do |n|
@obj.fast_algorithm(n)
end
end
# File 'lib/minitest/benchmark.rb', line 386
def self.bench_performance_linear name, threshold = 0.99, &work
  bench name do
    assert_performance_linear threshold, &work
  end
end
.bench_performance_logarithmic(name, threshold = 0.99, &work)
Create a benchmark that verifies that the performance is logarithmic.
describe "my class Bench" do
bench_performance_logarithmic "algorithm" do |n|
@obj.algorithm(n)
end
end
# File 'lib/minitest/benchmark.rb', line 432
def self.bench_performance_logarithmic name, threshold = 0.99, &work
  bench name do
    assert_performance_logarithmic threshold, &work
  end
end
.bench_performance_power(name, threshold = 0.99, &work)
Create a benchmark that verifies that the performance fits a power curve.
describe "my class Bench" do
bench_performance_power "algorithm" do |n|
@obj.algorithm(n)
end
end
# File 'lib/minitest/benchmark.rb', line 447
def self.bench_performance_power name, threshold = 0.99, &work
  bench name do
    assert_performance_power threshold, &work
  end
end
.bench_range(&block)
Specifies the ranges used for benchmarking for that class.
bench_range do
  bench_exp(2, 16, 2)
end
See Minitest::Benchmark#bench_range for more details.
# File 'lib/minitest/benchmark.rb', line 370
def self.bench_range &block
  return super unless block

  meta = (class << self; self; end)
  meta.send :define_method, "bench_range", &block
end
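As a final sketch tying the class methods together (MyParser and parse are placeholder names), a class-level bench_range changes the input sizes used by every benchmark defined in that spec:

describe "MyParser Bench" do
  # Benchmark at input sizes 2, 4, 8, and 16 instead of the default range.
  bench_range do
    bench_exp 2, 16, 2
  end

  bench_performance_linear "parse", 0.999 do |n|
    MyParser.new.parse("a" * n)
  end
end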