    Bryn Pickering
    @brynpickering
    @alexsescu if you set the capacity by using force_resource and resource=... then you shouldn't have to worry about what value you've set for energy_cap_max etc., because the technology will just use all the available resource (hence why the energy capacity constraints are ignored in the optimisation).
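    For example, a minimal sketch of that setup (tech and file names are placeholders):

        techs:
            my_pv:
                constraints:
                    resource: file=pv_resource.csv
                    force_resource: true  # use all available resource; capacity limits then don't bite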
    Bryn Pickering
    @brynpickering
    @abart89 your approach is exactly what I would do in order to make energy_cap_min work in the way you'd like. I've just checked the constraints, and everything is in place as I'd expect. Can you check (looking at model.inputs) that there are no location-specific overrides being applied in some places without you realising? You can check something like model.get_formatted_array('energy_cap_min').loc[{'techs': 'chiller_electric_large'}]
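    A minimal version of that check (the model path is a placeholder):

        import calliope

        model = calliope.Model('model.yaml')
        # look at the per-location values to spot any unexpected overrides
        caps = model.get_formatted_array('energy_cap_min')
        print(caps.loc[{'techs': 'chiller_electric_large'}])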
    Francesco Lombardi
    @FLomb
    Hi guys, quick question. If I am trying to make changes to the code on a branch I created, how can I "run" the branch instead of the master?
    GiorgioBalestrieri
    @GiorgioBalestrieri

    @FLomb based on the fact that there is a setup file, it should be pip-installable. You can install directly from GitHub (or any other git remote repo) using pip install git+<clone link to the repo>; you can point at a specific branch by appending @<branch name> to that link.

    Otherwise, if you have it locally, you can just cd into the project folder and run pip install -e ./ (-e means it will be updated each time you update the source code, it's convenient if you are actively developing your version)

    A good idea might be to create a separate conda environment for that, if you are relying on Anaconda
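    A sketch of the whole sequence, assuming a local clone and a branch called my-branch (the branch name is a placeholder):

        git clone https://github.com/calliope-project/calliope.git
        cd calliope
        git checkout my-branch
        pip install -e .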
    Francesco Lombardi
    @FLomb
    thanks for the hint, I tried to install it based on my local copy, but it doesn't seem to work properly. Maybe it's because I'm managing multiple branches in the same folder through GitHub Desktop and only master gets recognised when using pip install?
    GiorgioBalestrieri
    @GiorgioBalestrieri
    @FLomb if you use pip install in the local folder, it should install whatever branch you have checked out, so just make sure you check out the branch you want to use. Also make sure that you don't have calliope already installed through conda or something like that, and remember to add *.egg-info/ to your .gitignore
    Francesco Lombardi
    @FLomb
    @GiorgioBalestrieri of course I have calliope already installed through conda, but I created a new environment for the clean "branch" installation. The thing is that it didn't install the branch, even though I made sure the correct branch was checked out in the local folder before installing.
    Francesco Lombardi
    @FLomb
    Hi guys, I'm moving from single-node to multi-node and I'm getting weird warnings like:
    • Not building the link CNOR,CSUD because one or both of its locations have been removed from the model by setting exists: false
    which apparently are also the cause of a subsequent Python KeyError
    is there something very basic that I'm missing to make locations "exist"?
    apparently the problem only occurs for some seemingly random locations, even though all of them are defined in the same way
    Bryn Pickering
    @brynpickering
    It could be that one of CNOR or CSUD hasn't actually been defined in the model, so Calliope thinks that the user has deleted the location (whereas it just hasn't been defined). If those locations are definitely in the model, you could try explicitly setting them to exist (which is the default anyway), e.g. locations.CNOR.exists: true
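    For reference, a minimal sketch of an explicitly defined location (the coordinates and techs are placeholders):

        locations:
            CNOR:
                exists: true  # the default; shown only for explicitness
                coordinates: {lat: 43.0, lon: 11.0}
                techs:
                    demand_electricity: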
    Francesco Lombardi
    @FLomb
    Thanks Bryn, I managed to fix it: there was a typo in a location definition. Now it works, but there is still something weird happening: everything works fine when running in "planning" mode, but when trying to run in "operational" mode (as I normally do) it returns an error saying that I need to specify a finite capacity on each of the defined transmission lines. Which of course I did, using finite numbers (which work as expected in planning mode). Any hints? It's possible that there is still something wrong in how I defined those techs (considering that I used to work on a single-node model before), but it is quite weird that I have no problem when running in planning mode.
    Bryn Pickering
    @brynpickering
    Operation mode is quite rigid in its requirements, since it is so easy to define an optimisation problem which is unphysical. All capacity maximum values are set as the decision variable value, instead of acting as a maximum. So any infinite capacity won't work. The easiest thing here is to check model.inputs.energy_cap_max and/or model.inputs.energy_cap_equals to see if there is a rogue infinite or NaN value. NaN values will default to inf (as that is the default for energy_cap_max)
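    A quick way to hunt for those rogue values (a sketch, assuming a loaded model):

        import numpy as np

        caps = model.inputs.energy_cap_max  # an xarray DataArray
        # show any entries that are infinite or NaN (NaN will default to inf)
        print(caps.where(np.isinf(caps) | caps.isnull(), drop=True))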
    Francesco Lombardi
    @FLomb
    yes, in fact (in line with the warning and error) the energy_cap of transmission lines (and also of demands) is set to "nan". The question is: why? I have defined finite capacity numbers
    Also, running the same test in plan mode, transmission lines are no longer set to "nan"; they are given the specified capacity limit
    there's just something that makes it remove the energy_cap_equals constraints on transmission lines in operation mode
    Bryn Pickering
    @brynpickering
    demands shouldn't be a problem, since capacity doesn't really make sense for them. For transmission lines, this reminds me of a problem that came up before. Are you setting the transmission capacities at a technology or a link level?
    Francesco Lombardi
    @FLomb
    well, for the "big" transmission lines between bidding zones I'm setting the capacities at link level. Conversely, for the "small" lines that are conceived as free transmission I am setting a capacity constraint at tech level, the same for all of them, and it's a big number because I originally wanted it to be "inf"; then I put something like 10 GW to try to overcome this error, without success
    I think the latter lines are those giving the problem
    Bryn Pickering
    @brynpickering
    There's no reason for the inputs dataset to have different values for different solving modes, since there is nothing in the code that changes until after the model is sent to the solver. Are you sure that model.inputs gives you two different arrays between loading the model with plan vs operate (i.e. without ever telling the model to run the optimisation)?
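    A sketch of that comparison (the model path is a placeholder, and the override_dict keyword is assumed to be available in your version):

        import calliope

        m_plan = calliope.Model('model.yaml', override_dict={'run.mode': 'plan'})
        m_oper = calliope.Model('model.yaml', override_dict={'run.mode': 'operate'})
        # the input datasets should be identical before the model is run
        print(m_plan.inputs.energy_cap_max.equals(m_oper.inputs.energy_cap_max))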
    Francesco Lombardi
    @FLomb
    yes, absolutely. Anyway, I just tried to follow your hint and specify capacities at link level; also, I realised I was using energy_cap_max instead of energy_cap_equals for the "big" lines (even if Calliope should automatically set that to equals when running in operation mode, shouldn't it?). Well, after these two changes, it is happy to solve the model. The only thing that is still problematic is that it does the following: Resource capacity constraint removed from SICI::hydro_dam as force_resource is applied for every single supply_plus tech I have, in every region, even though for most of them I specifically set "force_resource: false".
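    For reference, a sketch of setting the capacity at link level (names and the value are placeholders):

        links:
            CNOR,CSUD:
                techs:
                    ac_transmission:
                        constraints:
                            energy_cap_equals: 10000  # kW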
    accordingly, results vary between plan and operate because supply_plus techs are free to supply much more than they should
    this is something I used to have also in single-node
    and that I avoided by applying some further tricks to those techs I wanted to keep under control in terms of resource (i.e. also applying a resource_cap_equals constraint, which I cannot do anymore because I'm using timeseries for these techs now).
    well, I guess I could still hack it in a similar way by applying the timeseries twice, to both "resource" and "resource_cap_equals", but it's not super elegant
    Francesco Lombardi
    @FLomb
    Ok, I managed to keep it under control
    Francesco Lombardi
    @FLomb
    Hi guys, urgent issue: some time ago we were discussing having variable COPs for heat pumps in a Calliope model, and @brynpickering said this is currently already possible by simply assigning a csv file with a time-step-dependent COP. Now, has anybody actually tried that? Because I get a quite explicit error saying "ValueError: can only convert an array of size 1 to a Python scalar". Which means, I think, that the code is conceived to take only a scalar for the "efficiency" constraint of a conversion_plus.
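    For context, this is the kind of assignment in question (tech and file names are placeholders):

        techs:
            heat_pump:
                constraints:
                    energy_eff: file=cop.csv  # one COP value per timestep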
    Francesco Lombardi
    @FLomb
    although, looking at Calliope's code, it does seem designed to take time-step-dependent efficiencies (this is from the conversion_plus constraints code): energy_eff = get_param(backend_model, 'energy_eff', (loc_tech, timestep))
    and also the documentation reports that this should be possible without problems
    Francesco Lombardi
    @FLomb
    could there be a bug? Should I open an issue?
    Bryn Pickering
    @brynpickering
    It doesn't look like it's a problem with it accepting timeseries per se
    I would open an issue with the full error log, then we can dig into it some more. Can you confirm that this also happens when assigning a timeseries to energy_eff for the urban scale model chp?
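    A sketch of that test (the override value and file name are placeholders, and the CSV would need to sit in the model's timeseries directory):

        import calliope

        model = calliope.examples.urban_scale(
            override_dict={'techs.chp.constraints.energy_eff': 'file=chp_eff.csv'}
        )
        model.run()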
    Francesco Lombardi
    @FLomb
    uhm, actually it doesn't. I tried to put a timeseries for that and it works without error. The output seems reasonable (I put a timeseries with efficiency of 0.2 for every time step and the result is that it doesn't choose to use the chp at all anymore)
    but let me try to put something greater than 1 as for HPs
    nope, it works pretty well
    so what could make it unhappy with my case?
    Francesco Lombardi
    @FLomb
    @brynpickering an update on this: I was wondering, what's different between my model and the urban_scale one? The run mode. I always use "operate", and often when I have problems they disappear when switching to "plan". The same happens for this issue with the time series. Is this helpful?
    Bryn Pickering
    @brynpickering
    Hi @FLomb, the issue is in checks we make before things go to Pyomo, which you should be able to see in the error traceback. Not sure when a solution to this will be pushed to master, but for your local copy, you can just update line 47 of calliope/backend/checks.py to if _is_in(loc_tech, var) and not pd.isnull(model_data[var].loc[loc_tech].values).any():
    Francesco Lombardi
    @FLomb
    Hi guys, I know I still have to give updates about previous issues, but we're hugely overloaded these days. There's a funny thing happening with a calliope_3.6 installation: we installed it on 2 different laptops and it works just fine, providing the same output for the same script and model formulation, just great. Now, when installing exactly the same thing on our workstation (same OS, more or less, same Conda version etc.), the same script/model.yaml work (i.e. no errors raised by Calliope, model solved) but there's a weird bug. The bug is that results (if displayed with the get_formatted_array function) are very different from those obtained when saving .to_csv. In particular, the latter are just fine and identical to those obtained on the other two laptops. The ones displayed with the Calliope API fail to account for the fact that the solar resource is multiplied by an energy_cap and area, so it's like totally negligible (0.X instead of XXX kW). As a consequence, we also fail to have storage (no VRES to be stored), so it's not just a matter of plotting or something like that. Any idea??
    Bryn Pickering
    @brynpickering
    As an input parameter, the resource variable in xarray that you see is not multiplied by anything like resource_area or energy_cap (since those are decision variables which are not known when you load the model inputs). It would also only be multiplied by one of resource_area or energy_cap, depending on whether you have set your resource_unit as energy_per_area or energy_per_cap, respectively. It isn't really possible for results to be different between the xarray dataset and the CSV files, since the latter is saved directly from the former, so you should see that energy_cap values, for instance, are the same in both. If results are significantly different to your other devices, I would check the versions of all your packages (and your solver, particularly if it is GLPK) and that you are definitely using calliope 0.6.3 (the current stable release), and not the current master branch development version (calliope.__version__ should say 0.6.3).
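    A sketch of both checks (the output path is a placeholder):

        import calliope

        print(calliope.__version__)  # should be 0.6.3
        model.to_csv('outputs')  # written directly from the same xarray dataset
        # compare against the in-memory array for the same variable
        print(model.get_formatted_array('energy_cap'))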
    Francesco Lombardi
    @FLomb
    thanks Bryn, of course I know that I have to pay attention to the resource_unit, but that was not really the problem. As strange as it may seem (I agree and understand it is virtually impossible for csv and get_formatted_array to provide different things), this is what was happening. Same model instance, one run, got solved (CPLEX, btw). Then I did model.to_csv and model.get_formatted_array: different timeseries for the variables that I previously mentioned. And the version of calliope is the same between the workstation and at least the second laptop (not mine, which has a dev version). But: we are realising that the issue is probably with the workstation and with its version of Anaconda (the latest one), which is also giving tons of problems with other open-source models that we have which are based on the same libraries and solvers. It looks like there's a backwards-compatibility issue with things developed in Python 3.6, even when you specifically tell conda to create environments based on Py 3.6. It is possible that something gets corrupted in the installation. I will try to reinstall a safer conda version on the workstation
    Francesco Lombardi
    @FLomb
    Ok, a fresh installation solved all the problems
    now, on to something that is probably more interesting for everybody. When modelling thermal storage in a standard way (i.e. 0.1-0.2% losses per timestep, efficiency = 1, small or no cost of operation), the model finds it optimal to keep the storage fully charged even during long periods of non-use (e.g. summer, for space heating). Any hints on how to easily avoid that (apart from putting some hard constraints into the code)? I guessed that adding some penalisation to the storage operation would do the trick, but apparently it's not enough. I think I also tried penalising the efficiency a little bit, but I'm not sure if I dreamed that or actually did it. Thanks as usual
    Bryn Pickering
    @brynpickering
    How have you modelled your storage over the year, i.e. is it assuming cyclic storage (storage at time t(T) = storage just before time t(0))? It isn't too strange in a multi-energy system for the storage to be used as a way of getting rid of excess energy, by storing it when it is abundant and letting it dissipate due to standing losses while there is less demand. You could try adding a hard constraint for non-summer use, or have the storage technology separated from the system using conversion technologies. If those conversion techs have 0% efficiency in the summer then they won't be able to provide the storage with any energy... Although I would say it is always better practice to try and work out why the storage system operating that way proves to be the cheapest way of running the system ;)
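    A minimal sketch of the conversion-tech separation described above (all names and the efficiency file are placeholders):

        techs:
            storage_charger:
                essentials:
                    parent: conversion
                    carrier_in: heat
                    carrier_out: stored_heat
                constraints:
                    energy_eff: file=charger_eff.csv  # zero in summer timesteps, so no charging then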