    Krzysztof Wozniak
    @Hejvi
    Hmm thanks, I will try to do that
    Krzysztof Wozniak
    @Hejvi
    Hello @jenisys, once more I am looking for a solution to a problem, I hope you are not annoyed by my questions :| Is there any way to get, from the context, the name of a step that is inside execute_steps()? context.scenario.steps[index].name only shows the name of the step that contains the execute_steps() call, but I cannot find a way to get the name of a step that is inside execute_steps()
    Krzysztof Wozniak
    @Hejvi

    Solved, in case someone has a similar problem.
    Someone gave me a solution:

    def before_step(context, step):
        context.step = step

    Then, inside the step, use context.step.name
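
    A minimal sketch of the usage side (the step text below is made up for illustration):

    from behave import step

    @step(u'the current step name is printed')
    def step_print_current_name(context):
        print(context.step.name)  # name of the step currently executing, set by before_step()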

    jenisys
    @jenisys
    @Hejvi
    Nice solution.
    Markella Efthymiou
    @marmiou

    Hello,
    I'm new to Behave. I would like to ask if it is possible to use the Page Object Model in Behave. Let's say I want to have a class Login that extends the main Page class. Is it possible to do that?

    class Login(Page):
        def __init__(self, browser):
            super(Page, self).__init__("browser")
    
        @given(u'user is in the login page')
        def login_page_loads(self):
            super(Page, self).verify_page_loads(self.browser, "http://calories-calc.herokuapp.com/")

    Will the test pick up the step methods that are inside the class? Where can I instantiate the Login object?

    jenisys
    @jenisys
    Behave steps are normally not usable as class methods, only as functions. You create a login_page = Login() object and optionally store it in the Context object in a step. Then you call login_page.login_page_loads(). Note that you should normally use self.verify_page_loads() (instead of using super; also note that correct usage of super would be super(Login, self).verify...(), unless you want to call the base class of the class Page).
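
    A sketch along those lines, under the assumptions of the question above (Page, verify_page_loads(), and the URL come from that snippet; context.browser and the step binding are made up):

    from behave import given

    class Login(Page):
        def __init__(self, browser):
            super(Login, self).__init__(browser)   # not super(Page, ...)
            self.browser = browser

        def login_page_loads(self):
            # plain method; the behave step decorator stays on a module-level function
            self.verify_page_loads(self.browser, "http://calories-calc.herokuapp.com/")

    @given(u'user is in the login page')
    def step_user_is_in_login_page(context):
        context.login_page = Login(context.browser)   # assumes a browser object on the context
        context.login_page.login_page_loads()
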
    Jake
    @jacobsorme
    Hello all, hope you are well. Quick question: is Tag Matching with Tag-Expressions (e.g. tag.*) being added in 1.2.7?
    jenisys
    @jenisys
    Tag matching is already contained in the current tip of the master branch in the GitHub repository; therefore, it will be contained in the next release.
    Jake
    @jacobsorme
    I see, thanks. Looking forward to using it.
    Jake
    @jacobsorme
    Just out of curiosity, when will the next release be?
    Daniel Holmes
    @jaitaiwan_gitlab
    Hey folks, in the behave tutorial in the docs there is a "fail" function mentioned. Where does that function come from?
    Krzysztof Wozniak
    @Hejvi
    Hello, I am looking for a way to insert, on the fly, a key:value pair into the context. Let's say I have a dynamically created string and I want to add it to the context as a key and assign a value to it. Is that possible?
    jenisys
    @jenisys
    @Hejvi Have you tried setattr(context, name, value) ?
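
    For illustration, a tiny sketch with a made-up attribute name:

    from behave import when

    @when(u'a value is stored under a dynamic key')
    def step_store_dynamic_value(context):
        key = "computed_total"          # any dynamically created string
        setattr(context, key, 42)       # attach it to the context on the fly
        assert context.computed_total == 42
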
    jenisys
    @jenisys
    @jaitaiwan_gitlab Related to fail() function: There is no such function in behave. A simple implementation would be: def fail(message): assert False, message
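
    Spelled out as a block, with a made-up usage inside a step:

    from behave import then

    def fail(message):
        # always raises AssertionError with the given message
        assert False, message

    @then(u'the response is valid')
    def step_response_is_valid(context):
        if not getattr(context, "response_ok", False):   # hypothetical flag
            fail("response was not valid")
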
    Krzysztof Wozniak
    @Hejvi
    @jenisys Yes, thank you!
    Masud Rana
    @masudr4n4
    When using Allure reports I cannot see any log information on a test case, only the step names and parameters. How can I also see the logs for a test case in the Allure report? Does anyone have an idea?
    andrea
    @andrearx
    Hi all, I've set up behave, but only 'before_all' runs and nothing fires in the steps... what can I do? More details are in my Stack Overflow question if someone can help me... thanks https://stackoverflow.com/questions/67303132/selenium-behave-debugging-with-vscode-python
    jenisys
    @jenisys
    When no features are found, how can any step be executed? Check why your features are not discovered.
    andrea
    @andrearx
    @jenisys exactly, if I put a breakpoint into a step file it fires (on loading the file), but unfortunately I can't do the same test on a feature... what should I check to find the problem? thanks for helping
    jenisys
    @jenisys
    For example, the current working directory is not the one that you expect. Put a print() statement in the before_all() hook or inspect it in the debugger at the before_all() breakpoint.
    The third arg in the VSCode configuration points to the steps directory; use the features directory instead.
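
    A quick sketch of the print() check mentioned above:

    # environment.py
    import os

    def before_all(context):
        # verify where behave is actually running from and which feature paths it was given
        print("cwd:", os.getcwd())
        print("paths:", context.config.paths)
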
    andrea
    @andrearx
    @jenisys thank you!! I resolved my problem with your tips! :) have a nice day
    Peter Bittner
    @bittner
    Note to everyone who cloned repositories from @behave-contrib: I changed the default branch from master to main (following our industry's attempt to use more inclusive language). You may need to update the local configuration of your Git remotes. :collision: :computer:
    Karl Hudgell
    @karl0ss
    Morning guys, can someone tell me, is there any way to remove the @1.x from scenario outline executions? It really messes up "grouping" when trying to use reporting, mainly Allure, with every "loop" being displayed as a separate scenario rather than being grouped... thanks
    jenisys
    @jenisys
    @karl0ss I assume you mean the „@{row.id}“ placeholder in the scenario_outline_annotation_schema that extends the name/title of the Scenario?!?
    Just reconfigure the „scenario_outline_annotation_schema“ in the „behave.ini“ config file: remove the placeholder from the schema. SEE: https://behave.readthedocs.io/en/latest/new_and_noteworthy_v1.2.5.html#scenario-outline-improvements
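
    A sketch of that setting (per the behave docs the default schema is "{name} -- @{row.id} {examples.name}"; the trimmed value below is just one possibility):

    # behave.ini
    [behave]
    scenario_outline_annotation_schema = {name} -- {examples.name}
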
    Karl Hudgell
    @karl0ss
    ah, yes, that is what I meant...
    I will have a look at this, thank you for your response.
    Krzysztof Wozniak
    @Hejvi
    Hello, does anyone know how to add custom parameters to the Allure report using behave? I need to attach some calculations from a step to the Parameters section in the scenario report.
    Stanislav Seliverstov
    @sseliverstov
    Connor Philip
    @connor-philip
    Hello! I was looking to add a feature to behave: the ability to add a delay/backoff to the scenario_run_with_retries function. I've added what I wanted and ran it against a demo project of mine to test it. However, when trying to run the behave tests within the behave repo itself, most of them are failing. Is there a contributing guide or any information on how to set up the environment properly for these tests to pass? I did the standard installs I found in the Travis file but no luck :/
    Of course I'd find it right after I ask! https://github.com/behave/behave.example
    Thanks anyway!
    Krzysztof Wozniak
    @Hejvi
    @sseliverstov Thanks again, I guess it would be better if we continue the discussion on only one Gitter channel, so let's stay on allure-core ;)
    Ganapati Bhat
    @Ganapati21795
    Hey, I just need a little help: is it possible to read CSV values into the Python behave file??
    jenisys
    @jenisys
    @Ganapati21795 You need to be more specific about what you are trying to achieve. Where do you want to read the CSV file? In a step implementation, in a hook, somewhere in Python code?!?
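
    If it is inside a step implementation, a minimal sketch could look like this (the file-name parameter and step text are made up):

    import csv

    from behave import given

    @given(u'the test data from "{csv_path}" is loaded')
    def step_load_csv(context, csv_path):
        # read all rows into a list of dicts and keep them on the context
        with open(csv_path, newline="") as f:
            context.csv_rows = list(csv.DictReader(f))
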
    Prakhar Gupta
    @prakhar.gupta_gitlab
    Hey guys, I'm using behave for writing integration tests and encountered an issue that causes my suite to take more memory as new cases are added. I tried printing the context and it is retaining all the variables created during feature execution. Is there a way to manually clean up the context? Or am I missing something here 😅
    Prakhar Gupta
    @prakhar.gupta_gitlab
    @jenisys Can you please help me, if possible?
    jenisys
    @jenisys
    @prakhar.gupta_gitlab The Context object is a layered object. What you add at the beginning of a feature remains there until the end of the feature. The same applies to the begin/end of the test run, a Scenario, etc.
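
    A sketch of that layering (the attribute names are made up):

    # environment.py
    def before_all(context):
        context.run_wide = "kept for the whole test run"

    def before_feature(context, feature):
        context.feature_wide = "dropped again after the feature ends"

    def before_scenario(context, scenario):
        context.scenario_wide = "dropped again after the scenario ends"
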
    Prakhar Gupta
    @prakhar.gupta_gitlab
    Yes, but in my case, it is retaining data till the end of the suite, causing it to use 1200+ MB for just 200 cases.
    jenisys
    @jenisys
    @prakhar.gupta_gitlab Then you probably added it (or the anchor object) in the before_all() hook
    Prakhar Gupta
    @prakhar.gupta_gitlab
    ok, let me try, thanks @jenisys
    Prakhar Gupta
    @prakhar.gupta_gitlab
    @jenisys no luck. I had defined an event_client in before_all and moved that block of code to before_feature, but it is still retaining the data in the context. Here is the pseudo-code of my before_all:
    @async_run_until_complete
    async def before_all(context: Context):
        context.config.setup_logging()
        logging_handler.init_logger_handler(log_level=LoggingConfig.LOG_LEVEL)
    
        if "event_client" not in context:
            context.event_client = EventManager()
    
        await context.event_client.init_event_client()
    
        if "some_variable" not in context:
            context.some_variable = int(MyTestConfig.SOME_VARIABLE)
    jenisys
    @jenisys
    @prakhar.gupta_gitlab If you move the initialisation of „event_client“ to „before_feature()“, the attribute is removed from the context after the feature ends. That does not mean that the EventManager class performs any cleanup. If you need that, you should register a cleanup function (or look at fixtures). In addition, the EventManager or the framework behind it may keep its resources. Note that the async execution of the hook does not really help (or improve the runtime), because the hook needs to be executed in a synchronous way (you need to know when the hook is done or if an error/exception occurs).
    jenisys
    @jenisys
    Python only performs object cleanup when the ref-count drops to zero. Therefore, if you hold on to your „event_client“ somewhere else in your source code, this will keep the object alive (for example).
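
    A sketch of the cleanup registration mentioned above (EventManager and its init/close coroutines are taken from the snippets in this discussion, not from behave):

    import asyncio

    def before_feature(context, feature):
        loop = asyncio.get_event_loop()
        context.event_client = EventManager()
        loop.run_until_complete(context.event_client.init_event_client())
        # runs automatically when the feature's context layer is popped
        context.add_cleanup(
            lambda: loop.run_until_complete(context.event_client.close_event_client()))
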
    Prakhar Gupta
    @prakhar.gupta_gitlab

    @jenisys Tried doing

    async def event_client_fixture(context: Context):
        context.event_client = EventManager()
        print("Reached in Fixture!!!")
        yield await context.event_client.init_event_client()
    
        print("Exiting the Fixture!!!")
    def before_feature(context: Context, feature):
        event_loop = asyncio.get_event_loop()
        event_loop.run_until_complete(use_fixture(event_client_fixture, context))

    But we are getting this error

    HOOK-ERROR in before_feature: TypeError: An asyncio.Future, a coroutine or an awaitable is required

    jenisys
    @jenisys
    A coroutine is not a normal function. A coroutine can only be called within an async event loop. You should know that if you are using async functions/coroutines. The annotation „@async_run_until_complete“ does that under the hood.
    Prakhar Gupta
    @prakhar.gupta_gitlab

    @jenisys Tried that previously as well; we still got the same error

    Try 1:

    @fixture
    async def event_client_fixture(context: Context):
        context.event_client = EventManager()
        print("Reached in Fixture!!!")
        yield await context.event_client.init_event_client()
    
        print("Exiting the Fixture!!!")
        # await context.event_client.close_event_client()
    @async_run_until_complete
    async def before_feature(context: Context, feature):
        await use_fixture(event_client_fixture, context)

    Got this error:
    HOOK-ERROR in before_feature: TypeError: object async_generator can't be used in 'await' expression

    Try 2:

    @fixture
    @async_run_until_complete
    async def event_client_fixture(context: Context):
        context.event_client = EventManager()
        print("Reached in Fixture!!!")
        yield await context.event_client.init_event_client()
    
        print("Exiting the Fixture!!!")
        # await context.event_client.close_event_client()
    @async_run_until_complete
    async def before_feature(context: Context, feature):
        await use_fixture(event_client_fixture, context)

    Got this error:
    HOOK-ERROR in before_feature: TypeError: An asyncio.Future, a coroutine or an awaitable is required

    At this point we are getting desperate, if possible can you please hop on a call with us?

    jenisys
    @jenisys
    @prakhar.gupta_gitlab
    Note that async support was only added for async step functions (by intention); the same applies to the @async_run_until_complete function decorator.
    You are now using it with hooks and fixtures. The problems you are running into are basic asyncio problems: you cannot call an async function/coroutine directly, you must call it via an async event loop.
    And you probably should not combine „yield await“ in the fixture; put them on separate lines. And do not use await use_fixture(…) in the before_feature() hook.
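
    Along those lines, a hedged sketch of a synchronous fixture/hook pair that drives the coroutines through the event loop itself (EventManager, init_event_client(), and close_event_client() come from the snippets above, not from behave):

    import asyncio

    from behave import fixture, use_fixture

    @fixture
    def event_client_fixture(context):
        loop = asyncio.get_event_loop()
        context.event_client = EventManager()
        loop.run_until_complete(context.event_client.init_event_client())    # setup
        yield context.event_client
        loop.run_until_complete(context.event_client.close_event_client())   # cleanup

    def before_feature(context, feature):
        # plain synchronous hook: no async decorator, no await
        use_fixture(event_client_fixture, context)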