Tom Augspurger
@TomAugspurger
It’s a bit different I think...
I would think that skipna=False would mean that the mode of [1, None, 1] is NaN.
In [12]: pd.Series([1, 2, 1, None, None]).mode(dropna=True)
Out[12]:
0    1.0
dtype: float64

In [13]: pd.Series([1, 2, 1, None, None]).sum(skipna=False)
Out[13]: nan
Sorry, copy-pasted the wrong thing... But does that difference make sense?
Joris Van den Bossche
@jorisvandenbossche
Hmm, yep, that's a good point.
Tom Augspurger
@TomAugspurger
It is still inconsistent...
Joris Van den Bossche
@jorisvandenbossche
But then the other way around: with 'dropna' (as it is for value_counts), I would expect that it drops NAs from the result
Tom Augspurger
@TomAugspurger
Can you say that again? I believe that mode is consistent with dropna.
Joris Van den Bossche
@jorisvandenbossche
but of course, if the NA is the only mode, it does not make sense to drop it and return something empty?
Tom Augspurger
@TomAugspurger
In [18]: pd.Series([1, None, None]).value_counts(dropna=False)
Out[18]:
NaN     2
 1.0    1
dtype: int64

In [19]: pd.Series([1, None, None]).mode(dropna=False)
Out[19]:
0   NaN
dtype: float64
(afk for a little while)
Joris Van den Bossche
@jorisvandenbossche
OK, so it is indeed not the same behaviour as skipna, as otherwise the following should be NaN:
In [37]: pd.Series([1, 1, None]).mode(dropna=False)
Out[37]:
0    1.0
dtype: float64
Still, I find it a bit confusing that it has a different behaviour from the NA skipping/propagation behaviour of the other reduction methods
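A small sketch contrasting the two semantics on the toy series from above (the outputs in the comments follow the behaviour shown in the transcript, not a proposed change):
import pandas as pd

s = pd.Series([1, 1, None])

# dropna semantics (what mode actually has): missing values only appear in
# the result if they are themselves (one of) the most frequent values.
s.mode(dropna=True)    # 0    1.0
s.mode(dropna=False)   # 0    1.0  (NaN occurs once, 1 occurs twice)

# skipna semantics (what the other reductions have): any missing value
# poisons the result when skipna=False, regardless of how frequent it is.
s.sum(skipna=False)    # nan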
Joris Van den Bossche
@jorisvandenbossche
@TomAugspurger while proofreading the whatsnew file, I saw some other things ... ahem (sorry that I did that :-))
But one thing: we seem to have added all the exotic plotting types to DataFrame/Series.plot as well
Tom Augspurger
@TomAugspurger
Oh, and there’s no whatsnew for that?
Joris Van den Bossche
@jorisvandenbossche
IMO, we really shouldn't expose things like "andrews_curves" too much (we should rather be deprecating them; they are only there for historical reasons)
Tom Augspurger
@TomAugspurger
Yeah, agreed.
Joris Van den Bossche
@jorisvandenbossche
No, no, there is a whatsnew (that's how I noticed it)
I would have just objected on the PR if I had seen it :-)
Tom Augspurger
@TomAugspurger
FYI, I probably won’t be able to tag things until ~7 hours from now, so no huge rush.
nitish gaddam
@nishgaddam_twitter
Hey guys, I am new to CVXPY and went through the documentation. I wrote a couple of maximization functions recently but am currently stuck on a way to design a constraint... I have my code and the problem on Stack Overflow, but not many people have been able to answer it: https://stackoverflow.com/questions/54336137/how-to-define-variables-constrains-to-pandas-dataframe-when-using-cvxpy-for-opt/54356289#54356289
Would be awesome if you guys could just check it out and let me know where I am making a mistake
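For anyone skimming later, a minimal generic sketch of the kind of setup being asked about; the column names, numbers, and constraints below are made up for illustration and are not the asker's actual problem:
import cvxpy as cp
import pandas as pd

# Hypothetical data: one row per candidate item, with a value to maximize
# and a cost to keep under a budget.
df = pd.DataFrame({"score": [3.0, 1.5, 2.0], "cost": [10.0, 4.0, 6.0]})

w = cp.Variable(len(df))                        # one decision variable per row
objective = cp.Maximize(df["score"].values @ w)
constraints = [
    w >= 0,                                     # no negative weights
    w <= 1,                                     # at most one "unit" per row
    df["cost"].values @ w <= 12,                # made-up budget constraint
]
problem = cp.Problem(objective, constraints)
problem.solve()
print(w.value)                                  # optimal weights as a NumPy array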
Tom Augspurger
@TomAugspurger
@datapythonista @jorisvandenbossche thoughts on pandas-dev/pandas#24451 and pandas-dev/pandas#24890
I say we do it.
And then tag.
Joris Van den Bossche
@jorisvandenbossche
yep
do you have thoughts on where to put the ecosystem page?
getting started or user guide?
if needed I can quickly move that
Tom Augspurger
@TomAugspurger
Not sure. I like your idea of a curated “ecosystem” page.
Joris Van den Bossche
@jorisvandenbossche
Can also leave it toplevel for now, and decide on it later (then for now no redirect is needed for that one)
Tom Augspurger
@TomAugspurger
so maybe throw cookbook and ecosystem in user guide for now, and we can move it to toplevel?
Ah yeah. Let’s do that.
Toplevel, with a plan to clean things up.
To make it deserving of the top level :)
Joris Van den Bossche
@jorisvandenbossche
yep :)
Tom Augspurger
@TomAugspurger
pandas-dev/pandas#24909 should be done as well.
I may trim out some unneeded redirects...
Joris Van den Bossche
@jorisvandenbossche
yes, I am just cleaning it up a little bit right now
Tom Augspurger
@TomAugspurger
Ah OK. Should I do pandas-dev/pandas#24890 then?
Joris Van den Bossche
@jorisvandenbossche
yep, that's fine
Tom Augspurger
@TomAugspurger
Moving ecosystem to the top level
k.
Joris Van den Bossche
@jorisvandenbossche
Is there a reason that IntervalArray is top-level while the other arrays are in pandas.arrays?
Tom Augspurger
@TomAugspurger
Yeah, since it has the special constructors.
And doesn’t really fit into pd.array.
It should maybe be in both...
Joris Van den Bossche
@jorisvandenbossche
but still, somebody can do pd.arrays.IntervalArray.from_... as well, so now it is a bit inconsistent?
Tom Augspurger
@TomAugspurger
Yeah, agreed.
Joris Van den Bossche
@jorisvandenbossche
(and there is also still pd.IntervalIndex.from_...)
Tom Augspurger
@TomAugspurger
I’ll throw it in arrays too.
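For reference, a quick sketch of the constructors being discussed, assuming the pandas.arrays spelling:
import pandas as pd

# The "special" alternative constructors exist on the array...
arr = pd.arrays.IntervalArray.from_breaks([0, 1, 2, 3])
arr = pd.arrays.IntervalArray.from_arrays([0, 1, 2], [1, 2, 3])
arr = pd.arrays.IntervalArray.from_tuples([(0, 1), (1, 2)])

# ...and, as noted above, the same constructors also live on the index.
idx = pd.IntervalIndex.from_breaks([0, 1, 2, 3])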