Hi there. Great tutorials! I have a specific question about the small n stats online tutorials for single-case charting. It's in regard to #019: subplots and multiple baselines 2, specifically about creating a multiple baseline by moving all the charts into a single chart in a new tab (as was done in that tutorial). I have a lot of data points, so when I try to put all the charts together into a single chart, the data points all squash together and it gets really messy. I can't widen the border of the original first chart that all the other charts go into. Are there any solutions or alternatives you could suggest?
Hi Stephen, could you possibly email the chart?
Excel is very particular, especially when nesting plots into other plots
Can you send one with some fake data in it? I understand there are confidentiality issues if you send the actual data, but if you can send something random but similar, it would be easier to replicate what you're seeing
Hi Shawn and Brent, I'm using the Discounting Model Selector (Windows version 1.8.2) with MCQ data for 85 participants, entering data for small, medium, and large reward magnitudes separately. I've entered the 9 delays for each reward magnitude in ascending order in cells A1:I1 (Delay Series). For the Value Series, participants' monetary choices corresponding to the delays are entered in cells A2:I86 (1 participant/row). I've entered the maximum reward amount for each magnitude as the maximum value (i.e., small = 35, medium = 60, large = 85). If I'm reading the output correctly, the Noise model is listed as the most probable model for all or nearly all participants for each of the reward magnitudes. As a quick check, I used the Web Discounting Model Selector for several participants and am getting the same results (though the noise model was not included as a probable model). I'm surprised by these results because I had previously computed k using the Kaplan et al. auto-scorer and also the Gray et al. scoring syntax. Both indicated very high consistency among the responses selected. I want to make sure that I'm using the Discounting Model Selector correctly, and I'd appreciate any insight or advice you might have. Thanks!
Just to note, MCQ responses are NOT interchangeable with indifference points
the DMS is designed for indifference points (i.e., (x, y) pairs of delay and subjective value)
if you have smaller-sooner/larger-later (SSR/LLR) comparisons and do not wish to defer to Kirby scoring (à la Brent's template), you could potentially use Mike Young's logistic regression method to model delay and reward sensitivity that way
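To give a rough idea of what that logistic approach looks like, here's a minimal sketch with made-up data (this is not Young's actual code, and the simulated layout and coefficients are all assumptions): each MCQ trial becomes one row, the binary choice is the outcome, and delay plus the amount ratio are the predictors.

```python
import numpy as np

# Sketch of the logistic-regression idea for MCQ-style choice data.
# Each row is one trial: did the participant choose the larger-later
# reward (1) or the smaller-sooner one (0)? Predictors are the delay
# to the larger-later reward and the amount ratio (SSR / LLR).
# All data below are fake; the layout is an assumption for illustration.

rng = np.random.default_rng(0)
n = 270  # e.g., 27 MCQ items x 10 participants of fake data
delay = rng.choice([7.0, 30.0, 80.0, 160.0], size=n)
ratio = rng.uniform(0.3, 0.95, size=n)

# Simulate choices: longer delays and relatively larger immediate
# amounts both push choices toward the smaller-sooner option.
true_logit = 3.0 - 0.02 * delay - 3.0 * ratio
choice = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

# Fit an ordinary logistic regression by Newton-Raphson (IRLS).
X = np.column_stack([np.ones(n), delay, ratio])

def fit_logistic(X, y, iters=25):
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (y - p)
        hess = X.T @ (X * (p * (1.0 - p))[:, None])
        beta += np.linalg.solve(hess, grad)
    return beta

beta_hat = fit_logistic(X, choice)
# beta_hat[1] is the delay-sensitivity coefficient (expected negative:
# longer delays lower the odds of the larger-later choice); beta_hat[2]
# is the reward-ratio coefficient.
print(beta_hat)
```

The negative delay coefficient plays the role of a discounting-rate analogue. Young's published approach uses multilevel models to handle participants as a grouping factor, which this single-pool sketch glosses over.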
Yup, I agree with Shawn. I'd probably recommend reporting standard Kirby k values and supplementing with the logistic approach. That should give you similar overall results.
Got it. Thanks to you both. I'm pretty new to this area of research and am trying to get up to speed on the basics. I've actually analyzed all of the data using Kirby's k, but wanted to make sure I hadn't missed something.