@fbparis More than naive, I think your question is central. The general approach is to think about how the data was generated; visualizing the data is very helpful for this, but it is not the only way. Background knowledge can also provide the "intuition" about how parameters could be related, and you should try to express those relationships in the model. Another element you can add to the mix is "defaults", like choosing weakly informative priors, things that generally work at least as a first guess. Also, model building is iterative, so you do not have to guess right on the first try; on the contrary, that is generally never the case (unless you have been modeling the same type of data over and over again). You can start with a very simple model, add complexity one step at a time, and then check whether that added complexity pays off or does not contribute anything useful. Not sure this is helpful (you can also try posting your question at https://discourse.pymc.io/).
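To make the "weakly informative priors as a first guess" idea concrete, here is a minimal numpy-only sketch of a prior predictive check; the model (a linear regression with Normal(0, 10) priors and a half-normal scale) and all names are hypothetical, chosen just for illustration, and this does not use PyMC itself:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical model: y ~ Normal(a + b * x, sigma)
# with weakly informative priors a, b ~ Normal(0, 10)
# and sigma ~ HalfNormal(5).
x = np.linspace(0, 1, 50)

n_draws = 1000
a = rng.normal(0, 10, size=n_draws)             # intercept draws from the prior
b = rng.normal(0, 10, size=n_draws)             # slope draws from the prior
sigma = np.abs(rng.normal(0, 5, size=n_draws))  # half-normal noise scale

# Simulate data sets implied by the priors alone (no observed data yet).
y_sim = (a[:, None]
         + b[:, None] * x
         + rng.normal(0, sigma[:, None], size=(n_draws, x.size)))

# If these simulated outcomes land wildly outside the plausible range of
# your real data, the priors are too vague and can be tightened.
print(y_sim.shape, float(y_sim.std()))
```

The same check-then-revise loop applies each time you add complexity: simulate from the priors, see whether the implied data look sensible, and adjust before fitting.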
About your second question, I guess the closest thing to that are non-parametric models; check this intro to Gaussian Processes http://docs.pymc.io/gp.html to get a better feeling for what I am talking about.
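To get a feeling for what a Gaussian Process prior looks like without going through PyMC's `gp` module, here is a small numpy-only sketch that draws random functions from a GP prior with a squared-exponential kernel; the kernel parameters are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(x1, x2, lengthscale=0.3, variance=1.0):
    """Squared-exponential (RBF) covariance between two sets of points."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

x = np.linspace(0, 1, 100)
K = rbf_kernel(x, x)

# A GP prior says f(x) ~ MultivariateNormal(0, K); each draw is an entire
# function, which is why GPs are called non-parametric. The small jitter on
# the diagonal keeps the covariance numerically positive definite.
f = rng.multivariate_normal(np.zeros(x.size), K + 1e-8 * np.eye(x.size), size=3)

print(f.shape)  # each row is one random smooth function evaluated at x
```

Shrinking the lengthscale makes the sampled functions wigglier; conditioning these draws on observed data is what turns the prior into a GP regression.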