transactron.testing package

Submodules

transactron.testing.functions module

transactron.testing.functions.data_const_to_dict(c: data.Const[data.Layout])

transactron.testing.infrastructure module

class transactron.testing.infrastructure.PysimSimulator

Bases: Simulator

__init__(module: HasElaborate, max_cycles: float = 100000.0, add_transaction_module=True, traces_file=None, clk_period=1e-06)
run() bool

Run the simulation indefinitely.

This method advances the simulation while any critical testbenches or processes continue executing. It is equivalent to:

while self.advance():
    pass

class transactron.testing.infrastructure.SimpleTestCircuit

Bases: Elaboratable, Generic[_T_HasElaborate]

__init__(dut: _T_HasElaborate)
debug_signals()
class transactron.testing.infrastructure.TestCaseWithSimulator

Bases: object

add_mock(sim: PysimSimulator, val: MethodMock)
dependency_manager: DependencyManager
fixture_initialize_testing_env(request)
async random_wait(sim: SimulatorContext, max_cycle_cnt: int, *, min_cycle_cnt: int = 0)

Wait for a random number of cycles in the range [min_cycle_cnt, max_cycle_cnt].

async random_wait_geom(sim: SimulatorContext, prob: float = 0.5)

Wait until the first success, where success occurs with probability prob in each clock cycle.

reinitialize_fixtures()
run_simulation(module: HasElaborate, max_cycles: float = 100000.0, add_transaction_module=True)
async tick(sim: SimulatorContext, cycle_cnt: int = 1)

Waits for the given number of cycles.
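
A minimal sketch of a test built on these helpers; the DUT class MyComponent is hypothetical, and the process is registered with the simulator's standard add_testbench method:

    from transactron.testing.infrastructure import SimpleTestCircuit, TestCaseWithSimulator

    class TestMyComponent(TestCaseWithSimulator):
        def test_waits(self):
            # MyComponent is a hypothetical Elaboratable exposing Transactron methods.
            circ = SimpleTestCircuit(MyComponent())

            async def process(sim):
                await self.tick(sim, 5)                           # exactly 5 cycles
                await self.random_wait(sim, 10, min_cycle_cnt=1)  # 1 to 10 cycles
                await self.random_wait_geom(sim, prob=0.25)       # geometric waiting time

            with self.run_simulation(circ) as sim:
                sim.add_testbench(process)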

transactron.testing.input_generation module

class transactron.testing.input_generation.OpNOP

Bases: object

transactron.testing.input_generation.generate_based_on_layout(layout: amaranth.lib.data.StructLayout | collections.abc.Iterable[tuple[str, 'ShapeLike | LayoutList']]) SearchStrategy[Mapping[str, Union[int, RecordIntDict]]]
transactron.testing.input_generation.generate_method_input(args: list[tuple[str, amaranth.lib.data.StructLayout | collections.abc.Iterable[tuple[str, 'ShapeLike | LayoutList']]]]) SearchStrategy[dict[str, Mapping[str, Union[int, RecordIntDict]]]]
transactron.testing.input_generation.generate_nops_in_list(max_nops: int, generate_list: SearchStrategy[list[T]]) SearchStrategy[list[Union[T, transactron.testing.input_generation.OpNOP]]]
transactron.testing.input_generation.generate_process_input(elem_count: int, max_nops: int, layouts: list[tuple[str, amaranth.lib.data.StructLayout | collections.abc.Iterable[tuple[str, 'ShapeLike | LayoutList']]]]) SearchStrategy[list[Union[dict[str, Mapping[str, Union[int, RecordIntDict]]], OpNOP]]]
transactron.testing.input_generation.generate_shrinkable_list(length: int, generator: SearchStrategy[T]) SearchStrategy[list[T]]

Trick based on https://github.com/HypothesisWorks/hypothesis/blob/6867da71beae0e4ed004b54b92ef7c74d0722815/hypothesis-python/src/hypothesis/stateful.py#L143

transactron.testing.input_generation.insert_nops(draw: DrawFn, max_nops: int, lst: list)
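
A minimal sketch of using these strategies with Hypothesis; the layout and test name are made up for illustration:

    from amaranth.lib import data
    from hypothesis import given

    from transactron.testing.input_generation import generate_based_on_layout

    # Hypothetical layout used only for illustration.
    layout = data.StructLayout({"data": 8, "valid": 1})

    @given(generate_based_on_layout(layout))
    def test_layout_values(arg):
        # Each drawn value maps field names to ints (or to nested mappings
        # for nested layouts).
        assert set(arg) == {"data", "valid"}
        assert 0 <= arg["data"] < 2**8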

transactron.testing.logging module

transactron.testing.logging.make_logging_process(level: int, namespace_regexp: str, on_error: Callable[[], Any])
transactron.testing.logging.parse_logging_level(str: str) int

Parse the log level from a string.

The level can be either a non-negative integer or a string representation of one of the predefined levels.

Raises an exception if the level cannot be parsed.
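
A minimal usage sketch; the named level "WARNING" is assumed to be among the predefined levels:

    from transactron.testing.logging import parse_logging_level

    print(parse_logging_level("30"))       # non-negative integer, returned as int
    print(parse_logging_level("WARNING"))  # named level, translated to its numeric value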

transactron.testing.method_mock module

class transactron.testing.method_mock.MethodMock

Bases: object

__init__(adapter: AdapterBase, function: Callable[..., Optional[Mapping[str, Union[int, Mapping[str, Union[int, RecordIntDict]]]]]], *, validate_arguments: Optional[Callable[..., bool]] = None, enable: Callable[[], bool] = <function MethodMock.<lambda>>, delay: float = 0)
static effect(effect: Callable[[], None])
async effect_process(sim: SimulatorContext) None
async output_process(sim: SimulatorContext) None
async validate_arguments_process(sim: SimulatorContext) None
transactron.testing.method_mock.def_method_mock(tb_getter: Union[Callable[[], TestbenchIO], Callable[[Any], TestbenchIO]], **kwargs) Callable[[Callable[[...], Optional[Mapping[str, Union[int, Mapping[str, Union[int, RecordIntDict]]]]]]], Callable[[], MethodMock]]

Decorator function to create method mock handlers. It should be applied to a function which describes the functionality we want to invoke on a method call. This function will be called in every clock cycle when the method is active, and also on combinational changes to its inputs.

The decorated function can have a single argument arg, which receives the arguments passed to a method as a data.Const, or multiple named arguments, which correspond to named arguments of the method.

This decorator can be applied to function definitions or method definitions. When applied to a method definition, lambdas passed to def_method_mock need to take a self argument, which should be their first parameter.

Mocks defined at class level or at test level are automatically discovered and don’t need to be manually added to the simulation.

Any side effects (state modification, assertions, etc.) need to be guarded using the MethodMock.effect decorator.

Make sure to defer accessing state, since decorators are evaluated eagerly during function declaration.

Parameters
tb_getter: Callable[[], TestbenchIO] | Callable[[Any], TestbenchIO]

Function to get the TestbenchIO of the mocked method.

enable: Callable[[], bool] | Callable[[Any], bool]

Function which decides if the method is enabled in a given clock cycle.

validate_arguments: Callable[..., bool]

Function which validates call arguments. This applies only to Adapters with with_validate_arguments set to True.

delay: float

Simulation time delay for method mock calling. Used for synchronization between different mocks and testbench processes.
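
A minimal sketch of a mock defined at test class level; the attribute self.circ.consume, the list self.received and the result field "ok" are assumptions made for illustration:

    from transactron.testing.functions import data_const_to_dict
    from transactron.testing.infrastructure import TestCaseWithSimulator
    from transactron.testing.method_mock import MethodMock, def_method_mock

    class TestWithMock(TestCaseWithSimulator):
        @def_method_mock(lambda self: self.circ.consume, enable=lambda self: True)
        def consume_mock(self, arg):
            @MethodMock.effect
            def eff():
                # Side effects are guarded by MethodMock.effect, as required above.
                self.received.append(data_const_to_dict(arg))

            return {"ok": 1}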

transactron.testing.profiler module

transactron.testing.profiler.profiler_process(transaction_manager: TransactionManager, profile: Profile)

transactron.testing.testbenchio module

class transactron.testing.testbenchio.CallTrigger

Bases: object

A trigger which allows calling multiple methods and sampling signals.

The call() and call_try() methods on a TestbenchIO always wait at least one clock cycle, so they cannot be used to call multiple methods in a single clock cycle. Usually this is not a problem, as different methods can be called from different simulation processes. But when more control is needed over when the different calls happen, this trigger class allows calling many methods in a single clock cycle.
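
A minimal sketch of calling two methods in the same clock cycle; first and second are hypothetical TestbenchIO objects:

    from transactron.testing.testbenchio import CallTrigger

    async def process(sim):  # sim is the simulator context
        # Both calls are attempted in the same clock cycle.
        res1, res2 = await CallTrigger(sim).call(first, data=1).call(second, data=2)
        # Each result is a data.Const on success, or None if that call did not go through.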

__init__(sim: SimulatorContext, _calls: Iterable[amaranth.hdl._ast.Value | int | enum.Enum | amaranth.hdl._ast.ValueCastable | tuple[transactron.testing.testbenchio.TestbenchIO, Optional[dict[str, Any]]]] = ())
Parameters
sim: SimulatorContext

Amaranth simulator context.

call(tbio: TestbenchIO, data: dict[str, Any] = {}, /, **kwdata)

Call a method and sample its result.

Adds a method call to the trigger. The method result is sampled on a clock edge. If the call did not succeed, the sampled value is None.

Parameters
tbio: TestbenchIO

The method to call.

data: dict[str, Any]

Method call arguments stored in a dict.

**kwdata: Any

Method call arguments passed as keyword arguments. If keyword arguments are used, the data argument should not be provided.

sample(*values: amaranth.hdl._ast.Value | int | enum.Enum | amaranth.hdl._ast.ValueCastable | transactron.testing.testbenchio.TestbenchIO)

Sample a signal or a method result on a clock edge.

Values are sampled as in the standard Amaranth TickTrigger. Sampling a method result works like call(), but the method is not called; another process can do that instead. If the method was not called, the sampled value is None.

Parameters
*values: ValueLike | TestbenchIO

Value or method to sample.

async until_done() Any

Wait until at least one of the calls succeeds.

The CallTrigger normally acts like TickTrigger, i.e. awaiting it advances the simulation to the next clock edge. It is possible that none of the calls could be performed, for example because the called methods were not enabled. If we only want to focus on the cycles in which one of the calls succeeded, until_done can be used. It works like until() in TickTrigger.
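
A minimal sketch using until_done() to skip cycles in which neither call succeeded; port_a and port_b are hypothetical TestbenchIO objects:

    from transactron.testing.testbenchio import CallTrigger

    async def consumer(sim):
        res_a, res_b = await CallTrigger(sim).call(port_a).call(port_b).until_done()
        # At least one of the results is not None here; a plain await could also
        # complete in a cycle in which both calls failed.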

class transactron.testing.testbenchio.TestbenchIO

Bases: Elaboratable

__init__(adapter: AdapterBase)
async call(sim: SimulatorContext, data={}, /, **kwdata) data.Const[data.StructLayout]
async call_do(sim: SimulatorContext) data.Const[data.StructLayout]
call_init(sim: SimulatorContext, data={}, /, **kwdata)
async call_result(sim: SimulatorContext) Optional[data.Const[data.StructLayout]]
async call_try(sim: SimulatorContext, data={}, /, **kwdata) Optional[data.Const[data.StructLayout]]
disable(sim: SimulatorContext)
property done
enable(sim: SimulatorContext)
get_call_result(sim: TestbenchContext) Optional[data.Const[data.StructLayout]]
get_done(sim: TestbenchContext)
get_outputs(sim: TestbenchContext) data.Const[data.StructLayout]
property outputs
sample_outputs(sim: SimulatorContext)
sample_outputs_done(sim: SimulatorContext)
sample_outputs_until_done(sim: SimulatorContext)
set_enable(sim: SimulatorContext, en)
set_inputs(sim: SimulatorContext, data)
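
A minimal sketch of a testbench process driving a single method; circ.push is a hypothetical TestbenchIO attribute of a SimpleTestCircuit, and the data argument name depends on the method layout:

    async def producer(sim):
        result = await circ.push.call(sim, data=5)     # waits until the call succeeds
        maybe = await circ.push.call_try(sim, data=6)  # returns None if the call failed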

transactron.testing.tick_count module

class transactron.testing.tick_count.TicksKey

Bases: SimpleKey[Signal]

__init__() None
transactron.testing.tick_count.make_tick_count_process()

Module contents