Ankur Ankan
@ankurankan
@karnatapu Yes, 0.1.7 is the latest. I am not sure about the cache implementation; it should have been there, but it doesn't seem to be. I will check and get back to you.
Ankur Ankan
@ankurankan
@karnatapu Thanks for pointing out the PR. I had totally forgotten about it. I will check if that can be merged.
Mark McKenzie
@mrkmcknz
Evening everyone. I was wondering what the most logical method is to get a list of value labels for a node. Kind of like what you see in the get_cpds() output.
Mark McKenzie
@mrkmcknz
I was half expecting it to be .variables
Ankur Ankan
@ankurankan
@mrkmcknz I am not sure what you exactly mean by value labels. Could you please elaborate?
Mark McKenzie
@mrkmcknz
+-------------------------------------+-----------+
| project_type(Fast Track Onboarding) | 0.0299222 |
+-------------------------------------+-----------+
| project_type(Innovation)            | 0.0113704 |
+-------------------------------------+-----------+
| project_type(governance)            | 0.0388989 |
+-------------------------------------+-----------+
| project_type(innovation)            | 0.032316  |
+-------------------------------------+-----------+
| project_type(other)                 | 0.831837  |
+-------------------------------------+-----------+
| project_type(performance)           | 0.0359066 |
+-------------------------------------+-----------+
| project_type(productivity)          | 0.0197487 |
+-------------------------------------+-----------+
In this example I want to get Fast Track Onboarding, Innovation...
Ankur Ankan
@ankurankan
@mrkmcknz You can use the .state_names attribute. It should return a dict of state names of all the variables.
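For example, a minimal sketch assuming a fitted BayesianModel named model that contains the project_type variable from the table above (the names are illustrative, not taken from this conversation):

# Hypothetical example: model is an existing, fitted BayesianModel.
# state_names maps each variable in the CPD to the list of its state labels.
cpd = model.get_cpds('project_type')
print(cpd.state_names['project_type'])
# e.g. ['Fast Track Onboarding', 'Innovation', 'governance', ...]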
Mark McKenzie
@mrkmcknz
Thanks @ankurankan, I managed to find it last night. I'm currently working on pgmpy/pgmpy#913; I might create a PR for my implementation at some point once I clean up the code.
Mark McKenzie
@mrkmcknz
I see a lot of comments in various places regarding continuous/discrete hybrid models being in the pipeline. I was wondering what progress has been made towards this.
Ankur Ankan
@ankurankan
@mrkmcknz I am currently working on a Data class which will implement different conditional independence testing algorithms (for both continuous and hybrid). And with minor changes in the current structure learning algorithms, they should be able to learn the structure from continuous and hybrid datasets. But I don't think I will have the bandwidth to work on parameter learning or inference on continuous models soon.
5991dream
@5991dream
Hello, I want to learn introductory material on PGMs. Do you have any recommendations?
5991dream
@5991dream
I am a newbie
pengjunli
@pengjunli
Hi guys, I am a beginner. I want to use the DBN part of pgmpy, but I have not found documentation on parameter learning, structure learning, or inference for DBNs. Is there any demo or documentation about DBNs? Thanks for the help.
Clemens Harten
@clemensharten_twitter
Hey everyone, I am just getting started with pgmpy and have a question regarding the performance of inference via BeliefPropagation. I have set up a moderately sized Bayesian network (about 40 nodes, 80 edges) and want to get the state probabilities for a central node (without providing any evidence). With the recent dev branch, this operation takes about 4-5 minutes on my developer machine (i7, 8GB RAM ...). I would have thought it would be faster. Am I hitting any limits here (exponential runtime growth?), or should this indeed be faster and I am doing something wrong? Any help much appreciated!
Clemens Harten
@clemensharten_twitter
... so, just for the record: BeliefPropagation is implemented as an exact algorithm. For my use case, I need an approximation, and BayesianModelSampling gives me exactly what I need :).
Ankur Ankan
@ankurankan
@clemensharten_twitter Yes, it's slow because it finds the exact solution. VariableElimination should be faster for exact solutions.
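A minimal sketch of both options, assuming an existing BayesianModel named model and a query variable 'X' (both names are placeholders, not from this conversation):

from pgmpy.inference import VariableElimination
from pgmpy.sampling import BayesianModelSampling

# Exact marginal via variable elimination
infer = VariableElimination(model)
print(infer.query(variables=['X']))

# Approximate marginal via forward sampling
sampler = BayesianModelSampling(model)
samples = sampler.forward_sample(size=10000)
print(samples['X'].value_counts(normalize=True))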
Yujian Liu
@yujianll
Hi everyone, I wonder what's the correct way to build a Bayesian Network from an undirected graph (I have a list of undirected edges, and I just want to add directionality to those edges).
Yujian Liu
@yujianll
Does anyone know if there is a function that returns the separating_sets given a specific undirected graph, or how I can do that manually?
jonvaljohn
@jonvaljohn
Hi, I just installed pgmpy on a Mac using the latest code from the dev branch. When I run "nosetests -v", one test fails; is that expected?
Ankur Ankan
@ankurankan
@jonvaljohn Not really. Is it TestIVEstimator by any chance?
jonvaljohn
@jonvaljohn

FAIL: test_sampling (pgmpy.tests.test_sampling.test_continuous_sampling.TestNUTSInference)
Traceback (most recent call last):
  File "/Users/jonvaljohn/Code/pgmpy/pgmpy/pgmpy/tests/test_sampling/test_continuous_sampling.py", line 208, in test_sampling
    np.linalg.norm(sample_covariance - self.test_model.covariance) < 0.4
AssertionError: False is not true

Ran 676 tests in 146.484s

FAILED (SKIP=7, failures=1)

jonvaljohn
@jonvaljohn
@ankurankan, this is the error I am getting.
Ankur Ankan
@ankurankan
@jonvaljohn Hmm, it's working fine on my machine. Could you tell me your python version and dependency packages' versions?
jonvaljohn
@jonvaljohn
Python 3.7
How do I find the dependency packages' versions?
Thanks for all your help.
Ankur Ankan
@ankurankan
@jonvaljohn You can run: pip freeze | grep -E '(numpy|scipy|pandas|networkx)=='
jonvaljohn
@jonvaljohn
networkx==2.2
numpy==1.16.2
pandas==0.24.2
scipy==1.2.1
jonvaljohn
@jonvaljohn
Has anyone used a dynamic Bayesian network in pgmpy? The provided code implements a 2-TBN, but I don't see a way to iterate over time. What is the best practice for providing new values for the CPDs of the "0" (time slice 0) parameters? I assume that the "1" (time slice 1) parameters should be updated automatically.
Ankur Ankan
@ankurankan
@jonvaljohn You can't use Variable Elimination for DBN with the current implementation. Try using pgmpy.inference.DBNInference.
jonvaljohn
@jonvaljohn
Thanks so much. Let me try... if I can get all this to work, I could contribute a DBN usage notebook so others can leverage this.
jonvaljohn
@jonvaljohn
from pgmpy.factors.discrete import TabularCPD
from pgmpy.models import DynamicBayesianNetwork as DBN
from pgmpy.inference import DBNInference


dbn = DBN()
# dbn.add_edges_from([(('M', 0), ('S', 0)), (('S', 0), ('S', 1)), (('S', 0), ('J', 0))])
dbn.add_edges_from([(('M', 1), ('S', 1)), (('S', 1), ('J', 1)), (('S', 0), ('S', 1))])

# S_cpd = TabularCPD(('S', 0), 2, [[0.5, 0.5]])  # Prior: the state should go 50/50
M_cpd = TabularCPD(('M', 1), 2, [[0.5, 0.5]])  # Prior: the model should go 50/50

M_S_S_cpd = TabularCPD(variable=('S', 1), variable_card=2,
                       values=[[0.95, 0.3, 0.1, 0.05],
                               [0.05, 0.7, 0.9, 0.95]],
                       evidence=[('M', 1), ('S', 0)],
                       evidence_card=[2, 2])

S_J_cpd = TabularCPD(('J', 1), 2, [[0.9, 0.1],
                                   [0.1, 0.9]],
                     evidence=[('S', 1)],
                     evidence_card=[2])

dbn.add_cpds(M_cpd, M_S_S_cpd, S_J_cpd)
dbn_inf = DBNInference(dbn)
dbn_inf.forward_inference([('J', 2)], {('M', 1): 0, ('M', 2): 0, ('S', 0): 0})

It is still failing. If the variable that carries over from one time slice to the next is also the parent within the individual time slice, then the code works; otherwise it fails.

Now this gives a KeyError.

Ankur Ankan
@ankurankan
@jonvaljohn Sorry for the late reply. Yes, the current code makes the assumption that the model structure remains the same in each time slice. Are you trying to have a different model structure?
Yujian Liu
@yujianll
Hi, it looks like the reduce() function in TabularCPD and DiscreteFactor takes (var_name, var_state_name) as input. I wonder if there is a way to pass (var_name, var_state_no) instead.
The reason I ask is that this for loop in pre_compute_reduce() in Sampling.py seems to use (var_name, var_state_no) as input:
for state_combination in itertools.product(
    *[range(self.cardinality[var]) for var in variable_evid]
):
    states = list(zip(variable_evid, state_combination))
    cached_values[state_combination] = variable_cpd.reduce(
        states, inplace=False
    ).values
Sandeep Narayanaswami
@sandeep-n
Hey folks!
I'm using pgmpy in a project, and needed to fit a LinearGaussianBayesianNetwork to a dataset. Since the .fit() method isn't yet implemented, I wrote my own using sklearn's LinearRegression.
Would the maintainers be interested in a PR with this implementation? Note that it would be introducing a dependency on sklearn. Or would you prefer an implementation with scipy/statsmodels?
Ankur Ankan
@ankurankan
@sandeep-n Hey, that would be great to have in pgmpy. Ideally, though, I wanted the implementation to use statsmodels, mainly because it already implements different fit metrics; with sklearn we would have to write our own methods to compute these. Do you think it would be possible for you to use statsmodels instead of sklearn? Otherwise, if you open a PR with your current implementation, I can work on it to use statsmodels.
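A rough sketch of the per-node approach being discussed here, fitting each node's linear Gaussian CPD by regressing it on its parents with statsmodels (the DataFrame and parent sets below are made-up placeholders, not pgmpy's API):

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data and structure; in practice these come from the dataset and model.
data = pd.DataFrame(np.random.randn(500, 3), columns=['A', 'B', 'C'])
parents = {'A': [], 'B': ['A'], 'C': ['A', 'B']}

params = {}
for node, pa in parents.items():
    if pa:
        X = sm.add_constant(data[pa])          # intercept + parent columns
        res = sm.OLS(data[node], X).fit()      # regress the node on its parents
        params[node] = (res.params.values, np.sqrt(res.scale))  # coefficients, residual std
    else:
        params[node] = (np.array([data[node].mean()]), data[node].std())

print(params)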
Sandeep Narayanaswami
@sandeep-n
@ankurankan Sounds good, I should be able to port it to statsmodels instead.
Ankur Ankan
@ankurankan
@sandeep-n Great. Let me know if I can help in any way :)
IshayTelavivi
@IshayTelavivi
Hi! I am new to pgmpy. I have created a model, generated the CPDs, and made a prediction, which is what I needed, and this was cool. However, there are two things I am struggling with: 1. predict_probability doesn't work; it gives me an index error (IndexError: index 11 is out of bounds for axis 0 with size 9) and I can't figure out why. The standard predict works fine. 2. I couldn't find any reference for using a latent variable. How can I include a latent variable, and how do I establish its CPD? Suppose my latent variable is "C", which is the outcome of "A" and "B", and the outcome of "C" is "D". How do I combine everything? Thanks
Ankur Ankan
@ankurankan
@IshayTelavivi Could you share your code so that I can reproduce the error? Maybe create an issue for it. Currently pgmpy doesn't support latent variables so you won't be able to do that right now.
felixleopoldo
@felixleopoldo
Hi, I want to import the data from the standard datasets in the R library bnlearn, http://www.bnlearn.com/bnrepository/. Has anyone done this before?
Ankur Ankan
@ankurankan
@felixleopoldo If you mean that you want to import the models, you can use the pgmpy.readwrite.BIFReader for the BIF format files.
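A minimal sketch of that, assuming a BIF file such as asia.bif has already been downloaded from the bnlearn repository (the filename is a placeholder):

from pgmpy.readwrite import BIFReader

reader = BIFReader('asia.bif')   # parse the downloaded BIF file
model = reader.get_model()       # Bayesian model with its CPDs attached
print(model.nodes())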
Tomislav Kovačević
@tomkovna

Hello,
Is there any way to update an already defined CPD for a given model with new data points that have missing values? Here's a code example (so you can see more clearly what I'd like to do):

import numpy as np
import pandas as pd
from pgmpy.models import BayesianModel
from pgmpy.estimators import MaximumLikelihoodEstimator, BayesianEstimator
from pgmpy.factors.discrete import TabularCPD

# define links between nodes
model = BayesianModel([('A', 'B'), ('C', 'B')])

# define some initial CPDs
cpd_a = TabularCPD(variable='A', variable_card=2,
                   values=[[0.9], [0.1]])
cpd_c = TabularCPD(variable='C', variable_card=2,
                   values=[[0.3], [0.7]])
cpd_b = TabularCPD(variable='B', variable_card=3,
                   values=[[0.2, 0.1, 0.25, 0.5],
                           [0.3, 0.5, 0.25, 0.3],
                           [0.5, 0.4, 0.5, 0.2]],
                   evidence=['A', 'C'],
                   evidence_card=[2, 2])

# Associating the parameters with the model structure.
model.add_cpds(cpd_a, cpd_b, cpd_c)

# Checking if the cpds are valid for the model.
model.check_model()

# generate some data, with C as a missing value
raw_a = np.random.randint(low=0, high=2,size=100)
raw_b = np.random.randint(low=0, high=3,size=100)
raw_c = np.empty(100)
raw_c[:] = np.NaN
data = pd.DataFrame({"A" : raw_a, "B" : raw_b, "C" : raw_c})

# define pseudo counts according to initial cpds and variable cardinality
pseudo_counts = {'A': [[300], [700]], 'B': [[500,100,100,300], [100,500,300,400], [400,500,100,200]], 'C': [[200], [100]]}

# fit model with new data 
model.fit(data, complete_samples_only=False, estimator=BayesianEstimator, prior_type='dirichlet', pseudo_counts=pseudo_counts)

# print updated cpds
for cpd in model.get_cpds():
    print("CPD of {variable}:".format(variable=cpd.variable))
    print(cpd)

When I try this, I get an error message saying:
"ValueError: The shape of pseudo_counts must be: (3, 0)"

Ankur Ankan
@ankurankan
@tomkovna This should ideally work. Seems like a bug. I will try to fix this and get back to you. Thanks for reporting.